Apr 17 23:43:44.961679 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Apr 17 22:11:20 -00 2026
Apr 17 23:43:44.961722 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:43:44.961744 kernel: BIOS-provided physical RAM map:
Apr 17 23:43:44.961757 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 17 23:43:44.961768 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Apr 17 23:43:44.961780 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Apr 17 23:43:44.961795 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Apr 17 23:43:44.961807 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Apr 17 23:43:44.961820 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Apr 17 23:43:44.961836 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Apr 17 23:43:44.961848 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Apr 17 23:43:44.961861 kernel: NX (Execute Disable) protection: active
Apr 17 23:43:44.961873 kernel: APIC: Static calls initialized
Apr 17 23:43:44.961886 kernel: efi: EFI v2.7 by EDK II
Apr 17 23:43:44.961903 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x7701a018
Apr 17 23:43:44.961920 kernel: SMBIOS 2.7 present.
Apr 17 23:43:44.961934 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Apr 17 23:43:44.961948 kernel: Hypervisor detected: KVM
Apr 17 23:43:44.961962 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 17 23:43:44.961977 kernel: kvm-clock: using sched offset of 3683773535 cycles
Apr 17 23:43:44.961991 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 17 23:43:44.962006 kernel: tsc: Detected 2499.994 MHz processor
Apr 17 23:43:44.962021 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 17 23:43:44.962035 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 17 23:43:44.962050 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Apr 17 23:43:44.962068 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 17 23:43:44.962082 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 17 23:43:44.962096 kernel: Using GB pages for direct mapping
Apr 17 23:43:44.962110 kernel: Secure boot disabled
Apr 17 23:43:44.962125 kernel: ACPI: Early table checksum verification disabled
Apr 17 23:43:44.962139 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Apr 17 23:43:44.962153 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Apr 17 23:43:44.962168 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Apr 17 23:43:44.962182 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Apr 17 23:43:44.962200 kernel: ACPI: FACS 0x00000000789D0000 000040
Apr 17 23:43:44.962215 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Apr 17 23:43:44.962229 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Apr 17 23:43:44.962244 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Apr 17 23:43:44.962258 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Apr 17 23:43:44.962273 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Apr 17 23:43:44.962293 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Apr 17 23:43:44.962312 kernel: ACPI: SSDT 0x0000000078952000 0000D1 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Apr 17 23:43:44.962327 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Apr 17 23:43:44.962343 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Apr 17 23:43:44.962359 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Apr 17 23:43:44.962375 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Apr 17 23:43:44.962390 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Apr 17 23:43:44.962405 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Apr 17 23:43:44.962423 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Apr 17 23:43:44.962439 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Apr 17 23:43:44.962454 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Apr 17 23:43:44.962470 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Apr 17 23:43:44.962485 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x789520d0]
Apr 17 23:43:44.962500 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Apr 17 23:43:44.962515 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 17 23:43:44.962530 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 17 23:43:44.962545 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Apr 17 23:43:44.962564 kernel: NUMA: Initialized distance table, cnt=1
Apr 17 23:43:44.962579 kernel: NODE_DATA(0) allocated [mem 0x7a8f0000-0x7a8f5fff]
Apr 17 23:43:44.962621 kernel: Zone ranges:
Apr 17 23:43:44.962650 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 17 23:43:44.962664 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Apr 17 23:43:44.962680 kernel: Normal empty
Apr 17 23:43:44.962695 kernel: Movable zone start for each node
Apr 17 23:43:44.962711 kernel: Early memory node ranges
Apr 17 23:43:44.962726 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 17 23:43:44.962746 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Apr 17 23:43:44.962762 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Apr 17 23:43:44.962777 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Apr 17 23:43:44.962792 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 17 23:43:44.962808 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 17 23:43:44.962824 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Apr 17 23:43:44.962839 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Apr 17 23:43:44.962855 kernel: ACPI: PM-Timer IO Port: 0xb008
Apr 17 23:43:44.962870 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 17 23:43:44.962885 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Apr 17 23:43:44.962904 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 17 23:43:44.962920 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 17 23:43:44.962935 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 17 23:43:44.962951 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 17 23:43:44.962965 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 17 23:43:44.962981 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 17 23:43:44.962996 kernel: TSC deadline timer available
Apr 17 23:43:44.963012 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 17 23:43:44.963027 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 17 23:43:44.963046 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Apr 17 23:43:44.963060 kernel: Booting paravirtualized kernel on KVM
Apr 17 23:43:44.963076 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 17 23:43:44.963092 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 17 23:43:44.963107 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 17 23:43:44.963123 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 17 23:43:44.963138 kernel: pcpu-alloc: [0] 0 1
Apr 17 23:43:44.963153 kernel: kvm-guest: PV spinlocks enabled
Apr 17 23:43:44.963168 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 17 23:43:44.963188 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:43:44.963204 kernel: random: crng init done
Apr 17 23:43:44.963220 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 17 23:43:44.963236 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 17 23:43:44.963251 kernel: Fallback order for Node 0: 0
Apr 17 23:43:44.963266 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Apr 17 23:43:44.963281 kernel: Policy zone: DMA32
Apr 17 23:43:44.963297 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 17 23:43:44.963316 kernel: Memory: 1874644K/2037804K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 162900K reserved, 0K cma-reserved)
Apr 17 23:43:44.963332 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 17 23:43:44.963347 kernel: Kernel/User page tables isolation: enabled
Apr 17 23:43:44.963363 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 17 23:43:44.963377 kernel: ftrace: allocated 149 pages with 4 groups
Apr 17 23:43:44.963393 kernel: Dynamic Preempt: voluntary
Apr 17 23:43:44.963408 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 17 23:43:44.963425 kernel: rcu: RCU event tracing is enabled.
Apr 17 23:43:44.963441 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 17 23:43:44.963460 kernel: Trampoline variant of Tasks RCU enabled.
Apr 17 23:43:44.963476 kernel: Rude variant of Tasks RCU enabled.
Apr 17 23:43:44.963491 kernel: Tracing variant of Tasks RCU enabled.
Apr 17 23:43:44.963507 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 17 23:43:44.963522 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 17 23:43:44.963538 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 17 23:43:44.963554 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 17 23:43:44.963584 kernel: Console: colour dummy device 80x25
Apr 17 23:43:44.964700 kernel: printk: console [tty0] enabled
Apr 17 23:43:44.964725 kernel: printk: console [ttyS0] enabled
Apr 17 23:43:44.964742 kernel: ACPI: Core revision 20230628
Apr 17 23:43:44.964760 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Apr 17 23:43:44.964784 kernel: APIC: Switch to symmetric I/O mode setup
Apr 17 23:43:44.964801 kernel: x2apic enabled
Apr 17 23:43:44.964816 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 17 23:43:44.964834 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns
Apr 17 23:43:44.964851 kernel: Calibrating delay loop (skipped) preset value.. 4999.98 BogoMIPS (lpj=2499994)
Apr 17 23:43:44.964871 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Apr 17 23:43:44.964889 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Apr 17 23:43:44.964905 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 17 23:43:44.964921 kernel: Spectre V2 : Mitigation: Retpolines
Apr 17 23:43:44.964938 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 17 23:43:44.964954 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 17 23:43:44.964971 kernel: RETBleed: Vulnerable
Apr 17 23:43:44.964987 kernel: Speculative Store Bypass: Vulnerable
Apr 17 23:43:44.965004 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 17 23:43:44.965021 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 17 23:43:44.965040 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 17 23:43:44.965057 kernel: active return thunk: its_return_thunk
Apr 17 23:43:44.965073 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 17 23:43:44.965090 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 17 23:43:44.965106 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 17 23:43:44.965123 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 17 23:43:44.965139 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Apr 17 23:43:44.965156 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Apr 17 23:43:44.965172 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 17 23:43:44.965189 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 17 23:43:44.965205 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 17 23:43:44.965225 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 17 23:43:44.965242 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 17 23:43:44.965258 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Apr 17 23:43:44.965275 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Apr 17 23:43:44.965290 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Apr 17 23:43:44.965307 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Apr 17 23:43:44.965323 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Apr 17 23:43:44.965340 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Apr 17 23:43:44.965356 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Apr 17 23:43:44.965373 kernel: Freeing SMP alternatives memory: 32K
Apr 17 23:43:44.965389 kernel: pid_max: default: 32768 minimum: 301
Apr 17 23:43:44.965405 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 17 23:43:44.965425 kernel: landlock: Up and running.
Apr 17 23:43:44.965442 kernel: SELinux: Initializing.
Apr 17 23:43:44.965458 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 17 23:43:44.965474 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 17 23:43:44.965491 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Apr 17 23:43:44.965576 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 23:43:44.965593 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 23:43:44.968313 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 23:43:44.968332 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Apr 17 23:43:44.968348 kernel: signal: max sigframe size: 3632
Apr 17 23:43:44.968373 kernel: rcu: Hierarchical SRCU implementation.
Apr 17 23:43:44.968390 kernel: rcu: Max phase no-delay instances is 400.
Apr 17 23:43:44.968406 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 17 23:43:44.968422 kernel: smp: Bringing up secondary CPUs ...
Apr 17 23:43:44.968437 kernel: smpboot: x86: Booting SMP configuration:
Apr 17 23:43:44.968453 kernel: .... node #0, CPUs: #1
Apr 17 23:43:44.968470 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Apr 17 23:43:44.968487 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Apr 17 23:43:44.968506 kernel: smp: Brought up 1 node, 2 CPUs
Apr 17 23:43:44.968522 kernel: smpboot: Max logical packages: 1
Apr 17 23:43:44.968538 kernel: smpboot: Total of 2 processors activated (9999.97 BogoMIPS)
Apr 17 23:43:44.968553 kernel: devtmpfs: initialized
Apr 17 23:43:44.968569 kernel: x86/mm: Memory block size: 128MB
Apr 17 23:43:44.968585 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Apr 17 23:43:44.968613 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 17 23:43:44.968629 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 17 23:43:44.968645 kernel: pinctrl core: initialized pinctrl subsystem
Apr 17 23:43:44.968664 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 17 23:43:44.968680 kernel: audit: initializing netlink subsys (disabled)
Apr 17 23:43:44.968695 kernel: audit: type=2000 audit(1776469425.443:1): state=initialized audit_enabled=0 res=1
Apr 17 23:43:44.968711 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 17 23:43:44.968727 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 17 23:43:44.968742 kernel: cpuidle: using governor menu
Apr 17 23:43:44.968757 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 17 23:43:44.968773 kernel: dca service started, version 1.12.1
Apr 17 23:43:44.968789 kernel: PCI: Using configuration type 1 for base access
Apr 17 23:43:44.968808 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 17 23:43:44.968824 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 17 23:43:44.968839 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 17 23:43:44.968855 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 17 23:43:44.968871 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 17 23:43:44.968886 kernel: ACPI: Added _OSI(Module Device)
Apr 17 23:43:44.968901 kernel: ACPI: Added _OSI(Processor Device)
Apr 17 23:43:44.968917 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 17 23:43:44.968933 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Apr 17 23:43:44.968952 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 17 23:43:44.968967 kernel: ACPI: Interpreter enabled
Apr 17 23:43:44.968983 kernel: ACPI: PM: (supports S0 S5)
Apr 17 23:43:44.968998 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 17 23:43:44.969014 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 17 23:43:44.969030 kernel: PCI: Using E820 reservations for host bridge windows
Apr 17 23:43:44.969045 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Apr 17 23:43:44.969060 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 17 23:43:44.969312 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Apr 17 23:43:44.969468 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Apr 17 23:43:44.969642 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Apr 17 23:43:44.969662 kernel: acpiphp: Slot [3] registered
Apr 17 23:43:44.969678 kernel: acpiphp: Slot [4] registered
Apr 17 23:43:44.969694 kernel: acpiphp: Slot [5] registered
Apr 17 23:43:44.969709 kernel: acpiphp: Slot [6] registered
Apr 17 23:43:44.969725 kernel: acpiphp: Slot [7] registered
Apr 17 23:43:44.969745 kernel: acpiphp: Slot [8] registered
Apr 17 23:43:44.969760 kernel: acpiphp: Slot [9] registered
Apr 17 23:43:44.969776 kernel: acpiphp: Slot [10] registered
Apr 17 23:43:44.969791 kernel: acpiphp: Slot [11] registered
Apr 17 23:43:44.969807 kernel: acpiphp: Slot [12] registered
Apr 17 23:43:44.969822 kernel: acpiphp: Slot [13] registered
Apr 17 23:43:44.969838 kernel: acpiphp: Slot [14] registered
Apr 17 23:43:44.969853 kernel: acpiphp: Slot [15] registered
Apr 17 23:43:44.969869 kernel: acpiphp: Slot [16] registered
Apr 17 23:43:44.969884 kernel: acpiphp: Slot [17] registered
Apr 17 23:43:44.969903 kernel: acpiphp: Slot [18] registered
Apr 17 23:43:44.969919 kernel: acpiphp: Slot [19] registered
Apr 17 23:43:44.969934 kernel: acpiphp: Slot [20] registered
Apr 17 23:43:44.969949 kernel: acpiphp: Slot [21] registered
Apr 17 23:43:44.969965 kernel: acpiphp: Slot [22] registered
Apr 17 23:43:44.969981 kernel: acpiphp: Slot [23] registered
Apr 17 23:43:44.969996 kernel: acpiphp: Slot [24] registered
Apr 17 23:43:44.970011 kernel: acpiphp: Slot [25] registered
Apr 17 23:43:44.970027 kernel: acpiphp: Slot [26] registered
Apr 17 23:43:44.970045 kernel: acpiphp: Slot [27] registered
Apr 17 23:43:44.970060 kernel: acpiphp: Slot [28] registered
Apr 17 23:43:44.970076 kernel: acpiphp: Slot [29] registered
Apr 17 23:43:44.970091 kernel: acpiphp: Slot [30] registered
Apr 17 23:43:44.970106 kernel: acpiphp: Slot [31] registered
Apr 17 23:43:44.970122 kernel: PCI host bridge to bus 0000:00
Apr 17 23:43:44.970268 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 17 23:43:44.970394 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 17 23:43:44.970516 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 17 23:43:44.970657 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Apr 17 23:43:44.970782 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Apr 17 23:43:44.970903 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 17 23:43:44.971058 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Apr 17 23:43:44.971277 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Apr 17 23:43:44.971424 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Apr 17 23:43:44.971567 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Apr 17 23:43:44.973387 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Apr 17 23:43:44.973568 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Apr 17 23:43:44.973725 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Apr 17 23:43:44.973866 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Apr 17 23:43:44.974005 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Apr 17 23:43:44.974141 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Apr 17 23:43:44.974305 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Apr 17 23:43:44.974728 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Apr 17 23:43:44.974881 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 17 23:43:44.976808 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Apr 17 23:43:44.976991 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 17 23:43:44.977173 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Apr 17 23:43:44.977328 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Apr 17 23:43:44.977488 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Apr 17 23:43:44.977983 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Apr 17 23:43:44.978012 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 17 23:43:44.978029 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 17 23:43:44.978045 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 17 23:43:44.978061 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 17 23:43:44.978077 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Apr 17 23:43:44.978099 kernel: iommu: Default domain type: Translated
Apr 17 23:43:44.978114 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 17 23:43:44.978129 kernel: efivars: Registered efivars operations
Apr 17 23:43:44.978144 kernel: PCI: Using ACPI for IRQ routing
Apr 17 23:43:44.978161 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 17 23:43:44.978177 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Apr 17 23:43:44.978194 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Apr 17 23:43:44.978369 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Apr 17 23:43:44.978506 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Apr 17 23:43:44.982563 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 17 23:43:44.982620 kernel: vgaarb: loaded
Apr 17 23:43:44.982637 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Apr 17 23:43:44.982653 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Apr 17 23:43:44.982668 kernel: clocksource: Switched to clocksource kvm-clock
Apr 17 23:43:44.982684 kernel: VFS: Disk quotas dquot_6.6.0
Apr 17 23:43:44.982699 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 17 23:43:44.982715 kernel: pnp: PnP ACPI init
Apr 17 23:43:44.982731 kernel: pnp: PnP ACPI: found 5 devices
Apr 17 23:43:44.982754 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 17 23:43:44.982770 kernel: NET: Registered PF_INET protocol family
Apr 17 23:43:44.982786 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 17 23:43:44.982801 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Apr 17 23:43:44.982816 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 17 23:43:44.982832 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 17 23:43:44.982847 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Apr 17 23:43:44.982862 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Apr 17 23:43:44.982878 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 17 23:43:44.982896 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 17 23:43:44.982912 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 17 23:43:44.982927 kernel: NET: Registered PF_XDP protocol family
Apr 17 23:43:44.983063 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 17 23:43:44.983182 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 17 23:43:44.983298 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 17 23:43:44.983412 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Apr 17 23:43:44.983526 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Apr 17 23:43:44.983677 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Apr 17 23:43:44.983698 kernel: PCI: CLS 0 bytes, default 64
Apr 17 23:43:44.983714 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 17 23:43:44.983730 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns
Apr 17 23:43:44.983745 kernel: clocksource: Switched to clocksource tsc
Apr 17 23:43:44.983760 kernel: Initialise system trusted keyrings
Apr 17 23:43:44.983775 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Apr 17 23:43:44.983791 kernel: Key type asymmetric registered
Apr 17 23:43:44.983809 kernel: Asymmetric key parser 'x509' registered
Apr 17 23:43:44.983824 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 17 23:43:44.983839 kernel: io scheduler mq-deadline registered
Apr 17 23:43:44.983855 kernel: io scheduler kyber registered
Apr 17 23:43:44.983870 kernel: io scheduler bfq registered
Apr 17 23:43:44.983885 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 17 23:43:44.983900 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 17 23:43:44.983915 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 17 23:43:44.983931 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 17 23:43:44.983948 kernel: i8042: Warning: Keylock active
Apr 17 23:43:44.983963 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 17 23:43:44.983979 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 17 23:43:44.984115 kernel: rtc_cmos 00:00: RTC can wake from S4
Apr 17 23:43:44.984240 kernel: rtc_cmos 00:00: registered as rtc0
Apr 17 23:43:44.984359 kernel: rtc_cmos 00:00: setting system clock to 2026-04-17T23:43:44 UTC (1776469424)
Apr 17 23:43:44.984477 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Apr 17 23:43:44.984496 kernel: intel_pstate: CPU model not supported
Apr 17 23:43:44.984516 kernel: efifb: probing for efifb
Apr 17 23:43:44.984531 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Apr 17 23:43:44.984546 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Apr 17 23:43:44.984561 kernel: efifb: scrolling: redraw
Apr 17 23:43:44.984577 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 17 23:43:44.984592 kernel: Console: switching to colour frame buffer device 100x37
Apr 17 23:43:44.984619 kernel: fb0: EFI VGA frame buffer device
Apr 17 23:43:44.984634 kernel: pstore: Using crash dump compression: deflate
Apr 17 23:43:44.984649 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 17 23:43:44.984669 kernel: NET: Registered PF_INET6 protocol family
Apr 17 23:43:44.984684 kernel: Segment Routing with IPv6
Apr 17 23:43:44.984700 kernel: In-situ OAM (IOAM) with IPv6
Apr 17 23:43:44.984715 kernel: NET: Registered PF_PACKET protocol family
Apr 17 23:43:44.984730 kernel: Key type dns_resolver registered
Apr 17 23:43:44.984745 kernel: IPI shorthand broadcast: enabled
Apr 17 23:43:44.984786 kernel: sched_clock: Marking stable (476001711, 150146320)->(717521967, -91373936)
Apr 17 23:43:44.984806 kernel: registered taskstats version 1
Apr 17 23:43:44.984822 kernel: Loading compiled-in X.509 certificates
Apr 17 23:43:44.984841 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 39e9969c7f49062f0fc1d1fb72e8f874436eb94f'
Apr 17 23:43:44.984857 kernel: Key type .fscrypt registered
Apr 17 23:43:44.984873 kernel: Key type fscrypt-provisioning registered
Apr 17 23:43:44.984888 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 17 23:43:44.984904 kernel: ima: Allocated hash algorithm: sha1
Apr 17 23:43:44.984920 kernel: ima: No architecture policies found
Apr 17 23:43:44.984936 kernel: clk: Disabling unused clocks
Apr 17 23:43:44.984952 kernel: Freeing unused kernel image (initmem) memory: 42892K
Apr 17 23:43:44.984968 kernel: Write protecting the kernel read-only data: 36864k
Apr 17 23:43:44.984985 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 17 23:43:44.985004 kernel: Run /init as init process
Apr 17 23:43:44.985020 kernel: with arguments:
Apr 17 23:43:44.985035 kernel: /init
Apr 17 23:43:44.985051 kernel: with environment:
Apr 17 23:43:44.985067 kernel: HOME=/
Apr 17 23:43:44.985082 kernel: TERM=linux
Apr 17 23:43:44.985100 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:43:44.985120 systemd[1]: Detected virtualization amazon.
Apr 17 23:43:44.985139 systemd[1]: Detected architecture x86-64. Apr 17 23:43:44.985155 systemd[1]: Running in initrd. Apr 17 23:43:44.985170 systemd[1]: No hostname configured, using default hostname. Apr 17 23:43:44.985184 systemd[1]: Hostname set to . Apr 17 23:43:44.985198 systemd[1]: Initializing machine ID from VM UUID. Apr 17 23:43:44.985212 systemd[1]: Queued start job for default target initrd.target. Apr 17 23:43:44.985228 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 17 23:43:44.985244 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 17 23:43:44.985267 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 17 23:43:44.985283 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 17 23:43:44.985299 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 17 23:43:44.985321 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 17 23:43:44.985343 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 17 23:43:44.985359 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 17 23:43:44.985374 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 17 23:43:44.985394 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 17 23:43:44.985408 systemd[1]: Reached target paths.target - Path Units. Apr 17 23:43:44.985422 systemd[1]: Reached target slices.target - Slice Units. Apr 17 23:43:44.985439 systemd[1]: Reached target swap.target - Swaps. Apr 17 23:43:44.985454 systemd[1]: Reached target timers.target - Timer Units. 
Apr 17 23:43:44.985473 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 17 23:43:44.985489 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 17 23:43:44.985514 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 17 23:43:44.985531 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 17 23:43:44.985546 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 17 23:43:44.985562 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 17 23:43:44.985578 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 17 23:43:44.985593 systemd[1]: Reached target sockets.target - Socket Units. Apr 17 23:43:44.985626 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 17 23:43:44.985642 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 17 23:43:44.985658 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 17 23:43:44.985675 systemd[1]: Starting systemd-fsck-usr.service... Apr 17 23:43:44.985692 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 17 23:43:44.985708 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 17 23:43:44.985724 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:43:44.985777 systemd-journald[179]: Collecting audit messages is disabled. Apr 17 23:43:44.985820 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 17 23:43:44.985837 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 17 23:43:44.985854 systemd[1]: Finished systemd-fsck-usr.service. Apr 17 23:43:44.985877 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Apr 17 23:43:44.985896 systemd-journald[179]: Journal started Apr 17 23:43:44.985931 systemd-journald[179]: Runtime Journal (/run/log/journal/ec293852a2770f7878b7a13fb526ee15) is 4.7M, max 38.2M, 33.4M free. Apr 17 23:43:44.952916 systemd-modules-load[180]: Inserted module 'overlay' Apr 17 23:43:44.996674 systemd[1]: Started systemd-journald.service - Journal Service. Apr 17 23:43:45.001114 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:43:45.005785 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 17 23:43:45.011820 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 17 23:43:45.014847 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:43:45.016506 kernel: Bridge firewalling registered Apr 17 23:43:45.015297 systemd-modules-load[180]: Inserted module 'br_netfilter' Apr 17 23:43:45.018427 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 17 23:43:45.020527 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 17 23:43:45.022232 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 17 23:43:45.033970 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 17 23:43:45.038660 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 17 23:43:45.049925 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:43:45.052785 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 17 23:43:45.068004 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 17 23:43:45.070664 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Apr 17 23:43:45.072531 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 17 23:43:45.082923 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 17 23:43:45.088715 dracut-cmdline[212]: dracut-dracut-053 Apr 17 23:43:45.088715 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a Apr 17 23:43:45.136987 systemd-resolved[220]: Positive Trust Anchors: Apr 17 23:43:45.137007 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 17 23:43:45.137073 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 17 23:43:45.147404 systemd-resolved[220]: Defaulting to hostname 'linux'. Apr 17 23:43:45.149845 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 17 23:43:45.150587 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Apr 17 23:43:45.178631 kernel: SCSI subsystem initialized Apr 17 23:43:45.188624 kernel: Loading iSCSI transport class v2.0-870. Apr 17 23:43:45.200628 kernel: iscsi: registered transport (tcp) Apr 17 23:43:45.221901 kernel: iscsi: registered transport (qla4xxx) Apr 17 23:43:45.221982 kernel: QLogic iSCSI HBA Driver Apr 17 23:43:45.263475 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 17 23:43:45.270868 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 17 23:43:45.297181 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 17 23:43:45.297261 kernel: device-mapper: uevent: version 1.0.3 Apr 17 23:43:45.297284 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 17 23:43:45.340646 kernel: raid6: avx512x4 gen() 17865 MB/s Apr 17 23:43:45.358625 kernel: raid6: avx512x2 gen() 17605 MB/s Apr 17 23:43:45.376626 kernel: raid6: avx512x1 gen() 17786 MB/s Apr 17 23:43:45.394624 kernel: raid6: avx2x4 gen() 17648 MB/s Apr 17 23:43:45.412626 kernel: raid6: avx2x2 gen() 17709 MB/s Apr 17 23:43:45.430887 kernel: raid6: avx2x1 gen() 13516 MB/s Apr 17 23:43:45.430960 kernel: raid6: using algorithm avx512x4 gen() 17865 MB/s Apr 17 23:43:45.449853 kernel: raid6: .... xor() 7328 MB/s, rmw enabled Apr 17 23:43:45.449920 kernel: raid6: using avx512x2 recovery algorithm Apr 17 23:43:45.471642 kernel: xor: automatically using best checksumming function avx Apr 17 23:43:45.632633 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 17 23:43:45.643125 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 17 23:43:45.653863 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 17 23:43:45.667381 systemd-udevd[399]: Using default interface naming scheme 'v255'. 
Apr 17 23:43:45.672679 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 17 23:43:45.682489 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 17 23:43:45.700935 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation Apr 17 23:43:45.732265 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 17 23:43:45.737891 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 17 23:43:45.790891 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 17 23:43:45.801987 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 17 23:43:45.829432 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 17 23:43:45.832220 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 17 23:43:45.833559 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 17 23:43:45.835676 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 17 23:43:45.842855 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 17 23:43:45.871038 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 17 23:43:45.899158 kernel: cryptd: max_cpu_qlen set to 1000 Apr 17 23:43:45.917624 kernel: AVX2 version of gcm_enc/dec engaged. Apr 17 23:43:45.917706 kernel: ena 0000:00:05.0: ENA device version: 0.10 Apr 17 23:43:45.920469 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Apr 17 23:43:45.922623 kernel: AES CTR mode by8 optimization enabled Apr 17 23:43:45.926032 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 17 23:43:45.926282 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:43:45.929481 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Apr 17 23:43:45.931693 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 23:43:45.932002 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:43:45.938967 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Apr 17 23:43:45.936375 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:43:45.950425 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:7b:01:c6:0e:d9 Apr 17 23:43:45.945084 (udev-worker)[453]: Network interface NamePolicy= disabled on kernel command line. Apr 17 23:43:45.965040 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:43:45.977624 kernel: nvme nvme0: pci function 0000:00:04.0 Apr 17 23:43:45.979614 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 23:43:45.981229 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Apr 17 23:43:45.979763 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:43:45.987698 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:43:45.994615 kernel: nvme nvme0: 2/0/0 default/read/poll queues Apr 17 23:43:46.005643 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 17 23:43:46.005721 kernel: GPT:9289727 != 33554431 Apr 17 23:43:46.005741 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 17 23:43:46.005759 kernel: GPT:9289727 != 33554431 Apr 17 23:43:46.005787 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 17 23:43:46.005806 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 17 23:43:46.012684 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:43:46.019880 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Apr 17 23:43:46.040928 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:43:46.085108 kernel: BTRFS: device fsid 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 devid 1 transid 32 /dev/nvme0n1p3 scanned by (udev-worker) (452) Apr 17 23:43:46.092640 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (456) Apr 17 23:43:46.169684 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Apr 17 23:43:46.179660 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Apr 17 23:43:46.185729 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Apr 17 23:43:46.186283 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Apr 17 23:43:46.193791 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Apr 17 23:43:46.201791 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 17 23:43:46.208093 disk-uuid[632]: Primary Header is updated. Apr 17 23:43:46.208093 disk-uuid[632]: Secondary Entries is updated. Apr 17 23:43:46.208093 disk-uuid[632]: Secondary Header is updated. Apr 17 23:43:46.213635 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 17 23:43:46.219629 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 17 23:43:46.225717 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 17 23:43:47.227700 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 17 23:43:47.229050 disk-uuid[633]: The operation has completed successfully. Apr 17 23:43:47.372582 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 17 23:43:47.372730 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 17 23:43:47.401973 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Apr 17 23:43:47.406172 sh[976]: Success Apr 17 23:43:47.421620 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Apr 17 23:43:47.531528 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 17 23:43:47.546744 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 17 23:43:47.548857 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 17 23:43:47.593920 kernel: BTRFS info (device dm-0): first mount of filesystem 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 Apr 17 23:43:47.594000 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:43:47.596186 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 17 23:43:47.599524 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 17 23:43:47.599578 kernel: BTRFS info (device dm-0): using free space tree Apr 17 23:43:47.627642 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 17 23:43:47.630813 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 17 23:43:47.632054 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 17 23:43:47.643874 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 17 23:43:47.645897 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 17 23:43:47.676289 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:43:47.676364 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:43:47.676389 kernel: BTRFS info (device nvme0n1p6): using free space tree Apr 17 23:43:47.683632 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 17 23:43:47.697124 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Apr 17 23:43:47.699647 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:43:47.707035 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 17 23:43:47.716922 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 17 23:43:47.762451 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 17 23:43:47.774821 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 17 23:43:47.827929 systemd-networkd[1169]: lo: Link UP Apr 17 23:43:47.827939 systemd-networkd[1169]: lo: Gained carrier Apr 17 23:43:47.831840 systemd-networkd[1169]: Enumeration completed Apr 17 23:43:47.833165 systemd-networkd[1169]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:43:47.833170 systemd-networkd[1169]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 17 23:43:47.835424 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 17 23:43:47.836124 systemd[1]: Reached target network.target - Network. Apr 17 23:43:47.844388 systemd-networkd[1169]: eth0: Link UP Apr 17 23:43:47.844397 systemd-networkd[1169]: eth0: Gained carrier Apr 17 23:43:47.844414 systemd-networkd[1169]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:43:47.876982 ignition[1109]: Ignition 2.19.0 Apr 17 23:43:47.876996 ignition[1109]: Stage: fetch-offline Apr 17 23:43:47.879161 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Apr 17 23:43:47.877273 ignition[1109]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:43:47.877288 ignition[1109]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 17 23:43:47.877859 ignition[1109]: Ignition finished successfully Apr 17 23:43:47.884824 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Apr 17 23:43:47.901306 ignition[1177]: Ignition 2.19.0 Apr 17 23:43:47.901319 ignition[1177]: Stage: fetch Apr 17 23:43:47.901890 ignition[1177]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:43:47.901904 ignition[1177]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 17 23:43:47.902031 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 17 23:43:47.902223 ignition[1177]: PUT error: Put "http://169.254.169.254/latest/api/token": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 17 23:43:48.102386 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #2 Apr 17 23:43:48.102586 ignition[1177]: PUT error: Put "http://169.254.169.254/latest/api/token": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 17 23:43:48.503693 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #3 Apr 17 23:43:48.503884 ignition[1177]: PUT error: Put "http://169.254.169.254/latest/api/token": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 17 23:43:49.304262 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #4 Apr 17 23:43:49.304432 ignition[1177]: PUT error: Put "http://169.254.169.254/latest/api/token": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 17 23:43:49.871800 systemd-networkd[1169]: eth0: Gained IPv6LL Apr 17 23:43:50.905881 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #5 Apr 17 23:43:50.906062 ignition[1177]: PUT error: Put "http://169.254.169.254/latest/api/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 17 23:43:54.109135 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #6 Apr 17 23:43:54.109302 ignition[1177]: PUT error: Put "http://169.254.169.254/latest/api/token": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 17 23:43:59.110374 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #7 Apr 17 23:43:59.110541 ignition[1177]: PUT error: Put "http://169.254.169.254/latest/api/token": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 17 23:44:04.115418 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #8 Apr 17 23:44:04.115581 ignition[1177]: PUT error: Put "http://169.254.169.254/latest/api/token": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 17 23:44:05.503708 systemd-networkd[1169]: eth0: DHCPv4 address 172.31.19.162/20, gateway 172.31.16.1 acquired from 172.31.16.1 Apr 17 23:44:09.115763 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #9 Apr 17 23:44:09.158775 ignition[1177]: PUT result: OK Apr 17 23:44:09.160725 ignition[1177]: parsed url from cmdline: "" Apr 17 23:44:09.160737 ignition[1177]: no config URL provided Apr 17 23:44:09.160747 ignition[1177]: reading system config file "/usr/lib/ignition/user.ign" Apr 17 23:44:09.160763 ignition[1177]: no config at "/usr/lib/ignition/user.ign" Apr 17 23:44:09.160798 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 17 23:44:09.163561 ignition[1177]: PUT result: OK Apr 17 23:44:09.163639 ignition[1177]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Apr 17 23:44:09.168836 ignition[1177]: GET result: OK Apr 17 23:44:09.168960 ignition[1177]: parsing config with SHA512: 19ebb547cd68352624089e39b54f4b1feec545fbed72615c2a9f6811263b2df014f0d49eead58f2fd3108aab8f05874b3c905f6d7a6b270576fd4bee40b1e2bb Apr 17 23:44:09.173846 unknown[1177]: fetched base config from "system" Apr 17 23:44:09.173864 unknown[1177]: fetched base config from "system"
Apr 17 23:44:09.175047 ignition[1177]: fetch: fetch complete Apr 17 23:44:09.173871 unknown[1177]: fetched user config from "aws" Apr 17 23:44:09.175061 ignition[1177]: fetch: fetch passed Apr 17 23:44:09.175124 ignition[1177]: Ignition finished successfully Apr 17 23:44:09.177868 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 17 23:44:09.182807 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 17 23:44:09.200564 ignition[1184]: Ignition 2.19.0 Apr 17 23:44:09.200579 ignition[1184]: Stage: kargs Apr 17 23:44:09.201093 ignition[1184]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:44:09.201108 ignition[1184]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 17 23:44:09.201229 ignition[1184]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 17 23:44:09.202261 ignition[1184]: PUT result: OK Apr 17 23:44:09.207964 ignition[1184]: kargs: kargs passed Apr 17 23:44:09.208064 ignition[1184]: Ignition finished successfully Apr 17 23:44:09.210138 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 17 23:44:09.222285 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 17 23:44:09.239009 ignition[1190]: Ignition 2.19.0 Apr 17 23:44:09.239023 ignition[1190]: Stage: disks Apr 17 23:44:09.239511 ignition[1190]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:44:09.239527 ignition[1190]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 17 23:44:09.239665 ignition[1190]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 17 23:44:09.242991 ignition[1190]: PUT result: OK Apr 17 23:44:09.247070 ignition[1190]: disks: disks passed Apr 17 23:44:09.247534 ignition[1190]: Ignition finished successfully Apr 17 23:44:09.248831 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 17 23:44:09.250002 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 17 23:44:09.250753 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 17 23:44:09.251421 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 17 23:44:09.251795 systemd[1]: Reached target sysinit.target - System Initialization. Apr 17 23:44:09.252377 systemd[1]: Reached target basic.target - Basic System. Apr 17 23:44:09.262871 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 17 23:44:09.285867 systemd-fsck[1198]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 17 23:44:09.290054 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 17 23:44:09.297974 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 17 23:44:09.400627 kernel: EXT4-fs (nvme0n1p9): mounted filesystem d3c199f8-8065-4f33-a75b-da2f09d4fc39 r/w with ordered data mode. Quota mode: none. Apr 17 23:44:09.401020 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 17 23:44:09.402219 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 17 23:44:09.410780 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 17 23:44:09.413845 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 17 23:44:09.415876 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 17 23:44:09.416784 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 17 23:44:09.416819 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 17 23:44:09.433671 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1219) Apr 17 23:44:09.434188 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Apr 17 23:44:09.439616 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:44:09.439682 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:44:09.442838 kernel: BTRFS info (device nvme0n1p6): using free space tree Apr 17 23:44:09.446036 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 17 23:44:09.451625 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 17 23:44:09.453965 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 17 23:44:09.521876 initrd-setup-root[1243]: cut: /sysroot/etc/passwd: No such file or directory Apr 17 23:44:09.528194 initrd-setup-root[1250]: cut: /sysroot/etc/group: No such file or directory Apr 17 23:44:09.533386 initrd-setup-root[1257]: cut: /sysroot/etc/shadow: No such file or directory Apr 17 23:44:09.538583 initrd-setup-root[1264]: cut: /sysroot/etc/gshadow: No such file or directory Apr 17 23:44:09.638383 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 17 23:44:09.644721 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 17 23:44:09.648802 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 17 23:44:09.656780 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Apr 17 23:44:09.657869 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:44:09.687012 ignition[1332]: INFO : Ignition 2.19.0 Apr 17 23:44:09.687012 ignition[1332]: INFO : Stage: mount Apr 17 23:44:09.687012 ignition[1332]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 23:44:09.687012 ignition[1332]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 17 23:44:09.687012 ignition[1332]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 17 23:44:09.690398 ignition[1332]: INFO : PUT result: OK Apr 17 23:44:09.691178 ignition[1332]: INFO : mount: mount passed Apr 17 23:44:09.691791 ignition[1332]: INFO : Ignition finished successfully Apr 17 23:44:09.693579 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 17 23:44:09.698751 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 17 23:44:09.700758 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 17 23:44:10.414031 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 17 23:44:10.437814 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1343) Apr 17 23:44:10.440730 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60 Apr 17 23:44:10.440798 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Apr 17 23:44:10.443509 kernel: BTRFS info (device nvme0n1p6): using free space tree Apr 17 23:44:10.448619 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 17 23:44:10.450801 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 17 23:44:10.473212 ignition[1360]: INFO : Ignition 2.19.0 Apr 17 23:44:10.473212 ignition[1360]: INFO : Stage: files Apr 17 23:44:10.474748 ignition[1360]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 23:44:10.474748 ignition[1360]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 17 23:44:10.474748 ignition[1360]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 17 23:44:10.475797 ignition[1360]: INFO : PUT result: OK Apr 17 23:44:10.478050 ignition[1360]: DEBUG : files: compiled without relabeling support, skipping Apr 17 23:44:10.479025 ignition[1360]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 17 23:44:10.479025 ignition[1360]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 17 23:44:10.484827 ignition[1360]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 17 23:44:10.486024 ignition[1360]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 17 23:44:10.486024 ignition[1360]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 17 23:44:10.485372 unknown[1360]: wrote ssh authorized keys file for user: core Apr 17 23:44:10.488702 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 17 23:44:10.489813 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 17 23:44:10.561898 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 17 23:44:10.697947 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 17 23:44:10.697947 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 17 23:44:10.699715 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Apr 17 23:44:10.927165 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 17 23:44:11.043705 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 17 23:44:11.043705 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Apr 17 23:44:11.046545 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Apr 17 23:44:11.046545 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 17 23:44:11.046545 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 17 23:44:11.046545 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 17 23:44:11.046545 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 17 23:44:11.046545 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 17 23:44:11.046545 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 17 23:44:11.046545 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:44:11.046545 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 17 23:44:11.046545 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 17 23:44:11.046545 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 17 23:44:11.046545 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 17 23:44:11.046545 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1 Apr 17 23:44:11.472738 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Apr 17 23:44:11.844853 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Apr 17 23:44:11.844853 ignition[1360]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Apr 17 23:44:11.847671 ignition[1360]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 17 23:44:11.847671 ignition[1360]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 17 23:44:11.847671 ignition[1360]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Apr 17 23:44:11.847671 ignition[1360]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Apr 17 23:44:11.847671 ignition[1360]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Apr 17 23:44:11.847671 ignition[1360]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 17 23:44:11.847671 ignition[1360]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 17 23:44:11.847671 ignition[1360]: INFO : files: files passed Apr 17 23:44:11.847671 ignition[1360]: INFO : Ignition finished successfully Apr 17 23:44:11.849444 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 17 23:44:11.855383 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 17 23:44:11.861900 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 17 23:44:11.864797 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 17 23:44:11.864922 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 17 23:44:11.878333 initrd-setup-root-after-ignition[1389]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 17 23:44:11.878333 initrd-setup-root-after-ignition[1389]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 17 23:44:11.880524 initrd-setup-root-after-ignition[1393]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 17 23:44:11.882572 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 17 23:44:11.883766 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 17 23:44:11.887798 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 17 23:44:11.927769 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 17 23:44:11.927912 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 17 23:44:11.929170 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 17 23:44:11.930488 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 17 23:44:11.931369 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 17 23:44:11.933899 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 17 23:44:11.960032 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:44:11.965924 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 17 23:44:11.979340 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:44:11.980125 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:44:11.981161 systemd[1]: Stopped target timers.target - Timer Units.
Apr 17 23:44:11.982105 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 17 23:44:11.982344 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:44:11.983446 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 17 23:44:11.984362 systemd[1]: Stopped target basic.target - Basic System.
Apr 17 23:44:11.985139 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 17 23:44:11.986024 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 23:44:11.986789 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 17 23:44:11.987561 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 17 23:44:11.988339 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 23:44:11.989143 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 17 23:44:11.990386 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 17 23:44:11.991154 systemd[1]: Stopped target swap.target - Swaps.
Apr 17 23:44:11.991882 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 17 23:44:11.992072 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 23:44:11.993158 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:44:11.994069 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:44:11.994744 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 17 23:44:11.994894 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:44:11.995546 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 17 23:44:11.995755 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 17 23:44:11.997104 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 17 23:44:11.997302 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:44:11.998145 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 17 23:44:11.998312 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 17 23:44:12.004997 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 17 23:44:12.006647 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 17 23:44:12.006884 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:44:12.011029 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 17 23:44:12.014311 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 17 23:44:12.017032 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:44:12.018944 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 17 23:44:12.019818 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 23:44:12.028551 ignition[1413]: INFO : Ignition 2.19.0
Apr 17 23:44:12.028551 ignition[1413]: INFO : Stage: umount
Apr 17 23:44:12.030103 ignition[1413]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:44:12.030103 ignition[1413]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 17 23:44:12.030103 ignition[1413]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 17 23:44:12.032352 ignition[1413]: INFO : PUT result: OK
Apr 17 23:44:12.030344 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 17 23:44:12.030480 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 17 23:44:12.036150 ignition[1413]: INFO : umount: umount passed
Apr 17 23:44:12.036751 ignition[1413]: INFO : Ignition finished successfully
Apr 17 23:44:12.038305 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 17 23:44:12.038467 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 17 23:44:12.041147 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 17 23:44:12.041230 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 17 23:44:12.043127 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 17 23:44:12.043230 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 17 23:44:12.043809 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 17 23:44:12.043873 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 17 23:44:12.044376 systemd[1]: Stopped target network.target - Network.
Apr 17 23:44:12.045160 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 17 23:44:12.045234 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 23:44:12.048259 systemd[1]: Stopped target paths.target - Path Units.
Apr 17 23:44:12.048715 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 17 23:44:12.048776 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:44:12.049227 systemd[1]: Stopped target slices.target - Slice Units.
Apr 17 23:44:12.049757 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 17 23:44:12.050239 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 17 23:44:12.050300 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 23:44:12.051746 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 17 23:44:12.051803 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 23:44:12.052959 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 17 23:44:12.053036 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 17 23:44:12.053528 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 17 23:44:12.053622 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 17 23:44:12.054279 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 17 23:44:12.058223 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 17 23:44:12.059651 systemd-networkd[1169]: eth0: DHCPv6 lease lost
Apr 17 23:44:12.061784 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 17 23:44:12.064159 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 17 23:44:12.064322 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 17 23:44:12.067801 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 17 23:44:12.067971 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 17 23:44:12.069408 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 17 23:44:12.069492 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:44:12.073759 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 17 23:44:12.074302 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 17 23:44:12.074396 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 23:44:12.075038 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 17 23:44:12.075099 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:44:12.078942 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 17 23:44:12.079030 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:44:12.079716 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 17 23:44:12.079785 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:44:12.080457 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:44:12.090984 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 17 23:44:12.091214 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:44:12.092940 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 17 23:44:12.093055 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:44:12.094907 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 17 23:44:12.094962 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:44:12.095687 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 17 23:44:12.095757 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 23:44:12.096942 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 17 23:44:12.097008 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 17 23:44:12.098198 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 17 23:44:12.098262 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:44:12.106882 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 17 23:44:12.107536 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 17 23:44:12.107643 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:44:12.108389 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:44:12.108452 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:44:12.111173 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 17 23:44:12.111306 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 17 23:44:12.116427 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 17 23:44:12.116524 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 17 23:44:12.248020 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 17 23:44:12.248168 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 17 23:44:12.249345 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 17 23:44:12.250135 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 17 23:44:12.250213 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 17 23:44:12.257962 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 17 23:44:12.266846 systemd[1]: Switching root.
Apr 17 23:44:12.296136 systemd-journald[179]: Journal stopped
Apr 17 23:44:13.651169 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Apr 17 23:44:13.651258 kernel: SELinux: policy capability network_peer_controls=1
Apr 17 23:44:13.651284 kernel: SELinux: policy capability open_perms=1
Apr 17 23:44:13.651304 kernel: SELinux: policy capability extended_socket_class=1
Apr 17 23:44:13.651339 kernel: SELinux: policy capability always_check_network=0
Apr 17 23:44:13.651364 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 17 23:44:13.651384 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 17 23:44:13.651403 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 17 23:44:13.651423 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 17 23:44:13.651443 kernel: audit: type=1403 audit(1776469452.572:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 17 23:44:13.651470 systemd[1]: Successfully loaded SELinux policy in 42.274ms.
Apr 17 23:44:13.651509 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.763ms.
Apr 17 23:44:13.651534 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:44:13.651560 systemd[1]: Detected virtualization amazon.
Apr 17 23:44:13.651583 systemd[1]: Detected architecture x86-64.
Apr 17 23:44:13.651630 systemd[1]: Detected first boot.
Apr 17 23:44:13.651651 systemd[1]: Initializing machine ID from VM UUID.
Apr 17 23:44:13.651667 zram_generator::config[1457]: No configuration found.
Apr 17 23:44:13.651687 systemd[1]: Populated /etc with preset unit settings.
Apr 17 23:44:13.651706 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 17 23:44:13.651724 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 17 23:44:13.651747 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 17 23:44:13.651766 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 17 23:44:13.651786 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 17 23:44:13.651808 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 17 23:44:13.651830 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 17 23:44:13.651860 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 17 23:44:13.651883 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 17 23:44:13.651902 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 17 23:44:13.651923 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 17 23:44:13.651947 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:44:13.651969 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:44:13.651992 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 17 23:44:13.652014 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 17 23:44:13.652033 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 17 23:44:13.652054 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 23:44:13.652099 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 17 23:44:13.652121 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:44:13.652142 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 17 23:44:13.652167 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 17 23:44:13.652189 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 17 23:44:13.652209 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 17 23:44:13.652230 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:44:13.652256 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 23:44:13.652278 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 23:44:13.652299 systemd[1]: Reached target swap.target - Swaps.
Apr 17 23:44:13.652319 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 17 23:44:13.652344 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 17 23:44:13.652364 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:44:13.652387 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:44:13.652408 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:44:13.652429 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 17 23:44:13.652450 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 17 23:44:13.652472 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 17 23:44:13.652492 systemd[1]: Mounting media.mount - External Media Directory...
Apr 17 23:44:13.652514 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:44:13.652537 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 17 23:44:13.652558 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 17 23:44:13.652579 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 17 23:44:13.652628 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 17 23:44:13.652649 systemd[1]: Reached target machines.target - Containers.
Apr 17 23:44:13.652667 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 17 23:44:13.652685 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:44:13.652706 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 23:44:13.652729 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 17 23:44:13.652749 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:44:13.652768 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 23:44:13.652787 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:44:13.652806 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 17 23:44:13.652826 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:44:13.652847 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 17 23:44:13.652867 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 17 23:44:13.652889 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 17 23:44:13.652908 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 17 23:44:13.652928 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 17 23:44:13.652949 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 23:44:13.652968 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 23:44:13.652990 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 17 23:44:13.653010 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 17 23:44:13.653029 kernel: loop: module loaded
Apr 17 23:44:13.653048 kernel: fuse: init (API version 7.39)
Apr 17 23:44:13.653070 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 23:44:13.653089 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 17 23:44:13.653108 systemd[1]: Stopped verity-setup.service.
Apr 17 23:44:13.653127 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:44:13.653147 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 17 23:44:13.653165 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 17 23:44:13.653184 systemd[1]: Mounted media.mount - External Media Directory.
Apr 17 23:44:13.653205 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 17 23:44:13.653225 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 17 23:44:13.653282 systemd-journald[1546]: Collecting audit messages is disabled.
Apr 17 23:44:13.653318 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 17 23:44:13.653338 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:44:13.653359 systemd-journald[1546]: Journal started
Apr 17 23:44:13.653399 systemd-journald[1546]: Runtime Journal (/run/log/journal/ec293852a2770f7878b7a13fb526ee15) is 4.7M, max 38.2M, 33.4M free.
Apr 17 23:44:13.281539 systemd[1]: Queued start job for default target multi-user.target.
Apr 17 23:44:13.301381 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Apr 17 23:44:13.302025 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 17 23:44:13.666210 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 23:44:13.664000 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 17 23:44:13.664206 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 17 23:44:13.665367 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:44:13.665554 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:44:13.667057 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:44:13.667495 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:44:13.669240 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 17 23:44:13.669988 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 17 23:44:13.673024 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:44:13.673213 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:44:13.674227 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:44:13.676266 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 17 23:44:13.677346 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 17 23:44:13.678386 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 17 23:44:13.695094 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 17 23:44:13.702622 kernel: ACPI: bus type drm_connector registered
Apr 17 23:44:13.705700 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 17 23:44:13.717766 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 17 23:44:13.720702 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 17 23:44:13.720762 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 23:44:13.723200 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 17 23:44:13.735795 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 17 23:44:13.743896 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 17 23:44:13.745892 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:44:13.754804 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 17 23:44:13.757150 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 17 23:44:13.757838 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 23:44:13.760814 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 17 23:44:13.762008 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 23:44:13.772675 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 23:44:13.776546 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 17 23:44:13.781820 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 17 23:44:13.786935 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 23:44:13.788902 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 23:44:13.789948 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 17 23:44:13.790713 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 17 23:44:13.791573 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 17 23:44:13.834841 systemd-journald[1546]: Time spent on flushing to /var/log/journal/ec293852a2770f7878b7a13fb526ee15 is 49.033ms for 1002 entries.
Apr 17 23:44:13.834841 systemd-journald[1546]: System Journal (/var/log/journal/ec293852a2770f7878b7a13fb526ee15) is 8.0M, max 195.6M, 187.6M free.
Apr 17 23:44:13.910374 systemd-journald[1546]: Received client request to flush runtime journal.
Apr 17 23:44:13.910519 kernel: loop0: detected capacity change from 0 to 140768
Apr 17 23:44:13.846091 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 17 23:44:13.852997 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 17 23:44:13.860934 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 17 23:44:13.865703 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:44:13.886356 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:44:13.901826 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 17 23:44:13.926082 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 17 23:44:13.928585 udevadm[1598]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 17 23:44:13.946568 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 17 23:44:13.951544 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 17 23:44:13.949731 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 17 23:44:13.973631 kernel: loop1: detected capacity change from 0 to 61336
Apr 17 23:44:13.985939 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 17 23:44:13.995854 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 23:44:14.048939 kernel: loop2: detected capacity change from 0 to 219192
Apr 17 23:44:14.051404 systemd-tmpfiles[1605]: ACLs are not supported, ignoring.
Apr 17 23:44:14.052085 systemd-tmpfiles[1605]: ACLs are not supported, ignoring.
Apr 17 23:44:14.065141 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:44:14.125631 kernel: loop3: detected capacity change from 0 to 142488
Apr 17 23:44:14.206680 kernel: loop4: detected capacity change from 0 to 140768
Apr 17 23:44:14.256740 kernel: loop5: detected capacity change from 0 to 61336
Apr 17 23:44:14.290634 kernel: loop6: detected capacity change from 0 to 219192
Apr 17 23:44:14.344693 kernel: loop7: detected capacity change from 0 to 142488
Apr 17 23:44:14.396271 (sd-merge)[1612]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Apr 17 23:44:14.399476 (sd-merge)[1612]: Merged extensions into '/usr'.
Apr 17 23:44:14.409235 systemd[1]: Reloading requested from client PID 1585 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 17 23:44:14.409412 systemd[1]: Reloading...
Apr 17 23:44:14.548625 zram_generator::config[1634]: No configuration found.
Apr 17 23:44:14.781517 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:44:14.834955 ldconfig[1580]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 17 23:44:14.873262 systemd[1]: Reloading finished in 463 ms.
Apr 17 23:44:14.899594 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 17 23:44:14.906525 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 17 23:44:14.913862 systemd[1]: Starting ensure-sysext.service... Apr 17 23:44:14.924037 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 17 23:44:14.942208 systemd[1]: Reloading requested from client PID 1690 ('systemctl') (unit ensure-sysext.service)... Apr 17 23:44:14.942374 systemd[1]: Reloading... Apr 17 23:44:14.960679 systemd-tmpfiles[1691]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 17 23:44:14.961239 systemd-tmpfiles[1691]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 17 23:44:14.962755 systemd-tmpfiles[1691]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 17 23:44:14.963219 systemd-tmpfiles[1691]: ACLs are not supported, ignoring. Apr 17 23:44:14.963325 systemd-tmpfiles[1691]: ACLs are not supported, ignoring. Apr 17 23:44:14.968337 systemd-tmpfiles[1691]: Detected autofs mount point /boot during canonicalization of boot. Apr 17 23:44:14.968352 systemd-tmpfiles[1691]: Skipping /boot Apr 17 23:44:14.984905 systemd-tmpfiles[1691]: Detected autofs mount point /boot during canonicalization of boot. Apr 17 23:44:14.984921 systemd-tmpfiles[1691]: Skipping /boot Apr 17 23:44:15.086683 zram_generator::config[1730]: No configuration found. Apr 17 23:44:15.192781 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:44:15.247016 systemd[1]: Reloading finished in 303 ms. Apr 17 23:44:15.265413 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Apr 17 23:44:15.278409 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 17 23:44:15.299337 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 17 23:44:15.304970 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 17 23:44:15.307941 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 17 23:44:15.313830 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 17 23:44:15.321093 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 17 23:44:15.331956 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 17 23:44:15.338712 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:44:15.339030 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 17 23:44:15.346915 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 17 23:44:15.354978 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 17 23:44:15.362559 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 17 23:44:15.363719 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 17 23:44:15.374908 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 17 23:44:15.376121 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:44:15.378651 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 17 23:44:15.379881 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Apr 17 23:44:15.387954 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:44:15.388231 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 17 23:44:15.396984 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 17 23:44:15.398132 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 17 23:44:15.398331 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:44:15.412997 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:44:15.413437 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 17 23:44:15.422186 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 17 23:44:15.423815 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 17 23:44:15.425395 systemd[1]: Reached target time-set.target - System Time Set. Apr 17 23:44:15.426908 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:44:15.430151 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 17 23:44:15.430831 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 17 23:44:15.441917 systemd[1]: Finished ensure-sysext.service. Apr 17 23:44:15.444009 systemd-udevd[1778]: Using default interface naming scheme 'v255'. Apr 17 23:44:15.448274 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Apr 17 23:44:15.449288 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 17 23:44:15.449471 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 17 23:44:15.461459 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 17 23:44:15.471041 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 17 23:44:15.471643 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 17 23:44:15.478872 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 17 23:44:15.483703 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 17 23:44:15.484891 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 17 23:44:15.487493 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 17 23:44:15.519031 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 17 23:44:15.525949 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 17 23:44:15.528370 augenrules[1811]: No rules Apr 17 23:44:15.530026 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 17 23:44:15.555756 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 17 23:44:15.566076 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 17 23:44:15.573098 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 17 23:44:15.583030 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 17 23:44:15.587161 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Apr 17 23:44:15.619219 systemd-resolved[1777]: Positive Trust Anchors: Apr 17 23:44:15.619234 systemd-resolved[1777]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 17 23:44:15.619298 systemd-resolved[1777]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 17 23:44:15.636313 systemd-resolved[1777]: Defaulting to hostname 'linux'. Apr 17 23:44:15.642401 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 17 23:44:15.643783 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 17 23:44:15.694161 systemd-networkd[1821]: lo: Link UP Apr 17 23:44:15.694173 systemd-networkd[1821]: lo: Gained carrier Apr 17 23:44:15.695078 systemd-networkd[1821]: Enumeration completed Apr 17 23:44:15.695206 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 17 23:44:15.696780 systemd[1]: Reached target network.target - Network. Apr 17 23:44:15.704860 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 17 23:44:15.707881 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 17 23:44:15.714196 (udev-worker)[1833]: Network interface NamePolicy= disabled on kernel command line. 
Apr 17 23:44:15.788092 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 17 23:44:15.788198 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Apr 17 23:44:15.796638 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 32 scanned by (udev-worker) (1829) Apr 17 23:44:15.799581 systemd-networkd[1821]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:44:15.800064 systemd-networkd[1821]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 17 23:44:15.800653 kernel: ACPI: button: Power Button [PWRF] Apr 17 23:44:15.802627 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Apr 17 23:44:15.804758 systemd-networkd[1821]: eth0: Link UP Apr 17 23:44:15.804995 systemd-networkd[1821]: eth0: Gained carrier Apr 17 23:44:15.805029 systemd-networkd[1821]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:44:15.806631 kernel: ACPI: button: Sleep Button [SLPF] Apr 17 23:44:15.814703 systemd-networkd[1821]: eth0: DHCPv4 address 172.31.19.162/20, gateway 172.31.16.1 acquired from 172.31.16.1 Apr 17 23:44:15.843633 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5 Apr 17 23:44:15.973129 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:44:16.003619 kernel: mousedev: PS/2 mouse device common for all mice Apr 17 23:44:16.015571 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Apr 17 23:44:16.023851 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 17 23:44:16.037142 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
Apr 17 23:44:16.044449 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 17 23:44:16.047917 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 17 23:44:16.070127 lvm[1937]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 17 23:44:16.097925 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 17 23:44:16.098783 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 17 23:44:16.105912 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 17 23:44:16.115063 lvm[1942]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 17 23:44:16.122760 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:44:16.123799 systemd[1]: Reached target sysinit.target - System Initialization. Apr 17 23:44:16.124652 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 17 23:44:16.125359 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 17 23:44:16.126394 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 17 23:44:16.127162 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 17 23:44:16.127830 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 17 23:44:16.128585 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 17 23:44:16.128643 systemd[1]: Reached target paths.target - Path Units. Apr 17 23:44:16.129223 systemd[1]: Reached target timers.target - Timer Units. Apr 17 23:44:16.130973 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
Apr 17 23:44:16.134117 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 17 23:44:16.139240 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 17 23:44:16.140294 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 17 23:44:16.140873 systemd[1]: Reached target sockets.target - Socket Units. Apr 17 23:44:16.141487 systemd[1]: Reached target basic.target - Basic System. Apr 17 23:44:16.142147 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 17 23:44:16.142177 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 17 23:44:16.143363 systemd[1]: Starting containerd.service - containerd container runtime... Apr 17 23:44:16.147794 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 17 23:44:16.151349 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 17 23:44:16.156764 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 17 23:44:16.163800 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 17 23:44:16.165133 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 17 23:44:16.167725 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 17 23:44:16.185935 jq[1950]: false Apr 17 23:44:16.187182 systemd[1]: Started ntpd.service - Network Time Service. Apr 17 23:44:16.190747 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 17 23:44:16.224711 systemd[1]: Starting setup-oem.service - Setup OEM... Apr 17 23:44:16.235833 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 17 23:44:16.241909 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Apr 17 23:44:16.250288 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 17 23:44:16.251517 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 17 23:44:16.254199 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 17 23:44:16.262802 systemd[1]: Starting update-engine.service - Update Engine... Apr 17 23:44:16.274749 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 17 23:44:16.278683 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 17 23:44:16.285281 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 17 23:44:16.286145 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 17 23:44:16.286557 systemd[1]: motdgen.service: Deactivated successfully. Apr 17 23:44:16.286833 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 17 23:44:16.304547 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Apr 17 23:44:16.312709 extend-filesystems[1951]: Found loop4 Apr 17 23:44:16.312709 extend-filesystems[1951]: Found loop5 Apr 17 23:44:16.312709 extend-filesystems[1951]: Found loop6 Apr 17 23:44:16.312709 extend-filesystems[1951]: Found loop7 Apr 17 23:44:16.312709 extend-filesystems[1951]: Found nvme0n1 Apr 17 23:44:16.312709 extend-filesystems[1951]: Found nvme0n1p1 Apr 17 23:44:16.312709 extend-filesystems[1951]: Found nvme0n1p2 Apr 17 23:44:16.312709 extend-filesystems[1951]: Found nvme0n1p3 Apr 17 23:44:16.312709 extend-filesystems[1951]: Found usr Apr 17 23:44:16.312709 extend-filesystems[1951]: Found nvme0n1p4 Apr 17 23:44:16.312709 extend-filesystems[1951]: Found nvme0n1p6 Apr 17 23:44:16.312709 extend-filesystems[1951]: Found nvme0n1p7 Apr 17 23:44:16.312709 extend-filesystems[1951]: Found nvme0n1p9 Apr 17 23:44:16.312709 extend-filesystems[1951]: Checking size of /dev/nvme0n1p9 Apr 17 23:44:16.374566 tar[1972]: linux-amd64/LICENSE Apr 17 23:44:16.374566 tar[1972]: linux-amd64/helm Apr 17 23:44:16.306084 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 17 23:44:16.377472 jq[1969]: true Apr 17 23:44:16.377580 coreos-metadata[1948]: Apr 17 23:44:16.364 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 17 23:44:16.377580 coreos-metadata[1948]: Apr 17 23:44:16.376 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Apr 17 23:44:16.380315 update_engine[1968]: I20260417 23:44:16.349493 1968 main.cc:92] Flatcar Update Engine starting Apr 17 23:44:16.380315 update_engine[1968]: I20260417 23:44:16.371711 1968 update_check_scheduler.cc:74] Next update check in 4m47s Apr 17 23:44:16.359956 dbus-daemon[1949]: [system] SELinux support is enabled Apr 17 23:44:16.343781 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Apr 17 23:44:16.368205 dbus-daemon[1949]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1821 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Apr 17 23:44:16.360417 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 17 23:44:16.377776 dbus-daemon[1949]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 17 23:44:16.374094 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 17 23:44:16.374130 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 17 23:44:16.375867 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 17 23:44:16.375897 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 17 23:44:16.376760 systemd[1]: Started update-engine.service - Update Engine. 
Apr 17 23:44:16.384878 coreos-metadata[1948]: Apr 17 23:44:16.384 INFO Fetch successful Apr 17 23:44:16.384966 coreos-metadata[1948]: Apr 17 23:44:16.384 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Apr 17 23:44:16.386637 coreos-metadata[1948]: Apr 17 23:44:16.385 INFO Fetch successful Apr 17 23:44:16.386637 coreos-metadata[1948]: Apr 17 23:44:16.385 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Apr 17 23:44:16.386737 ntpd[1953]: 17 Apr 23:44:16 ntpd[1953]: ntpd 4.2.8p17@1.4004-o Fri Apr 17 21:46:06 UTC 2026 (1): Starting Apr 17 23:44:16.386737 ntpd[1953]: 17 Apr 23:44:16 ntpd[1953]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 17 23:44:16.386737 ntpd[1953]: 17 Apr 23:44:16 ntpd[1953]: ---------------------------------------------------- Apr 17 23:44:16.386737 ntpd[1953]: 17 Apr 23:44:16 ntpd[1953]: ntp-4 is maintained by Network Time Foundation, Apr 17 23:44:16.386737 ntpd[1953]: 17 Apr 23:44:16 ntpd[1953]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 17 23:44:16.386737 ntpd[1953]: 17 Apr 23:44:16 ntpd[1953]: corporation. Support and training for ntp-4 are Apr 17 23:44:16.386737 ntpd[1953]: 17 Apr 23:44:16 ntpd[1953]: available at https://www.nwtime.org/support Apr 17 23:44:16.386737 ntpd[1953]: 17 Apr 23:44:16 ntpd[1953]: ---------------------------------------------------- Apr 17 23:44:16.386103 ntpd[1953]: ntpd 4.2.8p17@1.4004-o Fri Apr 17 21:46:06 UTC 2026 (1): Starting Apr 17 23:44:16.386128 ntpd[1953]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 17 23:44:16.386140 ntpd[1953]: ---------------------------------------------------- Apr 17 23:44:16.386150 ntpd[1953]: ntp-4 is maintained by Network Time Foundation, Apr 17 23:44:16.386160 ntpd[1953]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 17 23:44:16.386170 ntpd[1953]: corporation. Support and training for ntp-4 are Apr 17 23:44:16.386180 ntpd[1953]: available at https://www.nwtime.org/support Apr 17 23:44:16.386189 ntpd[1953]: ----------------------------------------------------
Apr 17 23:44:16.400630 coreos-metadata[1948]: Apr 17 23:44:16.397 INFO Fetch successful Apr 17 23:44:16.400630 coreos-metadata[1948]: Apr 17 23:44:16.397 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Apr 17 23:44:16.400630 coreos-metadata[1948]: Apr 17 23:44:16.398 INFO Fetch successful Apr 17 23:44:16.400630 coreos-metadata[1948]: Apr 17 23:44:16.398 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Apr 17 23:44:16.400121 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Apr 17 23:44:16.401148 ntpd[1953]: proto: precision = 0.074 usec (-24) Apr 17 23:44:16.403157 ntpd[1953]: 17 Apr 23:44:16 ntpd[1953]: proto: precision = 0.074 usec (-24) Apr 17 23:44:16.405627 coreos-metadata[1948]: Apr 17 23:44:16.404 INFO Fetch failed with 404: resource not found Apr 17 23:44:16.405627 coreos-metadata[1948]: Apr 17 23:44:16.404 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Apr 17 23:44:16.405763 ntpd[1953]: 17 Apr 23:44:16 ntpd[1953]: basedate set to 2026-04-05 Apr 17 23:44:16.405763 ntpd[1953]: 17 Apr 23:44:16 ntpd[1953]: gps base set to 2026-04-05 (week 2413) Apr 17 23:44:16.404542 ntpd[1953]: basedate set to 2026-04-05 Apr 17 23:44:16.404562 ntpd[1953]: gps base set to 2026-04-05 (week 2413) Apr 17 23:44:16.414273 coreos-metadata[1948]: Apr 17 23:44:16.410 INFO Fetch successful Apr 17 23:44:16.414273 coreos-metadata[1948]: Apr 17 23:44:16.410 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Apr 17 23:44:16.412949 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 17 23:44:16.418077 coreos-metadata[1948]: Apr 17 23:44:16.418 INFO Fetch successful Apr 17 23:44:16.418077 coreos-metadata[1948]: Apr 17 23:44:16.418 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Apr 17 23:44:16.420627 coreos-metadata[1948]: Apr 17 23:44:16.420 INFO Fetch successful Apr 17 23:44:16.420627 coreos-metadata[1948]: Apr 17 23:44:16.420 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Apr 17 23:44:16.421251 ntpd[1953]: Listen and drop on 0 v6wildcard [::]:123 Apr 17 23:44:16.421992 ntpd[1953]: 17 Apr 23:44:16 ntpd[1953]: Listen and drop on 0 v6wildcard [::]:123 Apr 17 23:44:16.421992 ntpd[1953]: 17 Apr 23:44:16 ntpd[1953]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 17 23:44:16.421992 ntpd[1953]: 17 Apr 23:44:16 ntpd[1953]: Listen normally on 2 lo 127.0.0.1:123 Apr 17 23:44:16.421992 ntpd[1953]: 17 Apr 23:44:16 ntpd[1953]: Listen normally on 3 eth0 172.31.19.162:123 Apr 17 23:44:16.421992 ntpd[1953]: 17 Apr 23:44:16 ntpd[1953]: Listen normally on 4 lo [::1]:123 Apr 17 23:44:16.421992 ntpd[1953]: 17 Apr 23:44:16 ntpd[1953]: bind(21) AF_INET6 fe80::47b:1ff:fec6:ed9%2#123 flags 0x11 failed: Cannot assign requested address Apr 17 23:44:16.421992 ntpd[1953]: 17 Apr 23:44:16 ntpd[1953]: unable to create socket on eth0 (5) for fe80::47b:1ff:fec6:ed9%2#123 Apr 17 23:44:16.421992 ntpd[1953]: 17 Apr 23:44:16 ntpd[1953]: failed to init interface for address fe80::47b:1ff:fec6:ed9%2 Apr 17 23:44:16.421992 ntpd[1953]: 17 Apr 23:44:16 ntpd[1953]: Listening on routing socket on fd #21 for interface updates Apr 17 23:44:16.421312 ntpd[1953]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 17 23:44:16.421525 ntpd[1953]: Listen normally on 2 lo 127.0.0.1:123 Apr 17 23:44:16.421573 ntpd[1953]: Listen normally on 3 eth0 172.31.19.162:123 Apr 17 23:44:16.421659 ntpd[1953]: Listen normally on 4 lo [::1]:123 Apr 17 23:44:16.421707 ntpd[1953]: bind(21) AF_INET6 fe80::47b:1ff:fec6:ed9%2#123 flags 0x11 failed: Cannot assign requested address Apr 17 23:44:16.421729 ntpd[1953]: unable to create socket on eth0 (5) for fe80::47b:1ff:fec6:ed9%2#123 Apr 17 23:44:16.421747 ntpd[1953]: failed to init interface for address fe80::47b:1ff:fec6:ed9%2 Apr 17 23:44:16.421782 ntpd[1953]: Listening on routing socket on fd #21 for interface updates
Apr 17 23:44:16.429811 jq[1986]: true Apr 17 23:44:16.424121 (ntainerd)[1988]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 17 23:44:16.430296 coreos-metadata[1948]: Apr 17 23:44:16.423 INFO Fetch successful Apr 17 23:44:16.430296 coreos-metadata[1948]: Apr 17 23:44:16.423 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Apr 17 23:44:16.436631 coreos-metadata[1948]: Apr 17 23:44:16.430 INFO Fetch successful Apr 17 23:44:16.439956 ntpd[1953]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 17 23:44:16.441743 ntpd[1953]: 17 Apr 23:44:16 ntpd[1953]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 17 23:44:16.441743 ntpd[1953]: 17 Apr 23:44:16 ntpd[1953]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 17 23:44:16.439994 ntpd[1953]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 17 23:44:16.444340 extend-filesystems[1951]: Resized partition /dev/nvme0n1p9 Apr 17 23:44:16.450645 extend-filesystems[2002]: resize2fs 1.47.1 (20-May-2024) Apr 17 23:44:16.468625 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Apr 17 23:44:16.553175 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 17 23:44:16.554460 systemd[1]: Finished setup-oem.service - Setup OEM. Apr 17 23:44:16.556329 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 17 23:44:16.672968 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 32 scanned by (udev-worker) (1832) Apr 17 23:44:16.689065 systemd-logind[1967]: Watching system buttons on /dev/input/event1 (Power Button) Apr 17 23:44:16.690453 systemd-logind[1967]: Watching system buttons on /dev/input/event2 (Sleep Button) Apr 17 23:44:16.690571 systemd-logind[1967]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 17 23:44:16.692271 systemd-logind[1967]: New seat seat0. Apr 17 23:44:16.694197 systemd[1]: Started systemd-logind.service - User Login Management. Apr 17 23:44:16.713307 bash[2031]: Updated "/home/core/.ssh/authorized_keys" Apr 17 23:44:16.720442 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 17 23:44:16.734934 systemd[1]: Starting sshkeys.service... Apr 17 23:44:16.747873 dbus-daemon[1949]: [system] Successfully activated service 'org.freedesktop.hostname1' Apr 17 23:44:16.748260 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Apr 17 23:44:16.753067 dbus-daemon[1949]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1994 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Apr 17 23:44:16.771073 systemd[1]: Starting polkit.service - Authorization Manager... Apr 17 23:44:16.778346 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 17 23:44:16.789560 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Apr 17 23:44:16.789046 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Apr 17 23:44:16.808522 extend-filesystems[2002]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Apr 17 23:44:16.808522 extend-filesystems[2002]: old_desc_blocks = 1, new_desc_blocks = 2 Apr 17 23:44:16.808522 extend-filesystems[2002]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Apr 17 23:44:16.813223 extend-filesystems[1951]: Resized filesystem in /dev/nvme0n1p9 Apr 17 23:44:16.809453 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 17 23:44:16.810485 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 17 23:44:16.860454 polkitd[2037]: Started polkitd version 121 Apr 17 23:44:16.884005 polkitd[2037]: Loading rules from directory /etc/polkit-1/rules.d Apr 17 23:44:16.894565 polkitd[2037]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 17 23:44:16.900439 polkitd[2037]: Finished loading, compiling and executing 2 rules Apr 17 23:44:16.901294 systemd[1]: Started polkit.service - Authorization Manager. Apr 17 23:44:16.901069 dbus-daemon[1949]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 17 23:44:16.904064 polkitd[2037]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 17 23:44:16.981811 systemd-resolved[1777]: System hostname changed to 'ip-172-31-19-162'. 
Apr 17 23:44:16.981812 systemd-hostnamed[1994]: Hostname set to (transient) Apr 17 23:44:16.995439 sshd_keygen[1997]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 17 23:44:16.995792 coreos-metadata[2038]: Apr 17 23:44:16.995 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 17 23:44:16.996934 coreos-metadata[2038]: Apr 17 23:44:16.996 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Apr 17 23:44:16.998560 locksmithd[1995]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 17 23:44:17.000375 coreos-metadata[2038]: Apr 17 23:44:17.000 INFO Fetch successful Apr 17 23:44:17.000447 coreos-metadata[2038]: Apr 17 23:44:17.000 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Apr 17 23:44:17.002505 coreos-metadata[2038]: Apr 17 23:44:17.002 INFO Fetch successful Apr 17 23:44:17.007401 unknown[2038]: wrote ssh authorized keys file for user: core Apr 17 23:44:17.084668 update-ssh-keys[2112]: Updated "/home/core/.ssh/authorized_keys" Apr 17 23:44:17.086298 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 17 23:44:17.090038 systemd[1]: Finished sshkeys.service. Apr 17 23:44:17.128130 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 17 23:44:17.136353 systemd-networkd[1821]: eth0: Gained IPv6LL Apr 17 23:44:17.138027 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 17 23:44:17.146980 systemd[1]: Started sshd@0-172.31.19.162:22-20.229.252.112:59896.service - OpenSSH per-connection server daemon (20.229.252.112:59896). Apr 17 23:44:17.150667 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 17 23:44:17.154733 systemd[1]: Reached target network-online.target - Network is Online. Apr 17 23:44:17.174437 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. 
Apr 17 23:44:17.187924 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:44:17.195932 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 17 23:44:17.209142 systemd[1]: issuegen.service: Deactivated successfully. Apr 17 23:44:17.209401 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 17 23:44:17.244849 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 17 23:44:17.334037 amazon-ssm-agent[2151]: Initializing new seelog logger Apr 17 23:44:17.336020 amazon-ssm-agent[2151]: New Seelog Logger Creation Complete Apr 17 23:44:17.336238 amazon-ssm-agent[2151]: 2026/04/17 23:44:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 17 23:44:17.336314 amazon-ssm-agent[2151]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 17 23:44:17.336960 amazon-ssm-agent[2151]: 2026/04/17 23:44:17 processing appconfig overrides Apr 17 23:44:17.337522 amazon-ssm-agent[2151]: 2026/04/17 23:44:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 17 23:44:17.337648 amazon-ssm-agent[2151]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 17 23:44:17.337805 amazon-ssm-agent[2151]: 2026/04/17 23:44:17 processing appconfig overrides Apr 17 23:44:17.338190 amazon-ssm-agent[2151]: 2026/04/17 23:44:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 17 23:44:17.339494 amazon-ssm-agent[2151]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 17 23:44:17.339693 amazon-ssm-agent[2151]: 2026/04/17 23:44:17 processing appconfig overrides Apr 17 23:44:17.340514 amazon-ssm-agent[2151]: 2026-04-17 23:44:17 INFO Proxy environment variables: Apr 17 23:44:17.354146 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 17 23:44:17.356624 amazon-ssm-agent[2151]: 2026/04/17 23:44:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Apr 17 23:44:17.356624 amazon-ssm-agent[2151]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 17 23:44:17.356624 amazon-ssm-agent[2151]: 2026/04/17 23:44:17 processing appconfig overrides Apr 17 23:44:17.356723 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 17 23:44:17.371109 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 17 23:44:17.379793 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 17 23:44:17.381573 systemd[1]: Reached target getty.target - Login Prompts. Apr 17 23:44:17.414042 containerd[1988]: time="2026-04-17T23:44:17.413944421Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 17 23:44:17.440383 amazon-ssm-agent[2151]: 2026-04-17 23:44:17 INFO https_proxy: Apr 17 23:44:17.500407 containerd[1988]: time="2026-04-17T23:44:17.500316447Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:44:17.506627 containerd[1988]: time="2026-04-17T23:44:17.506037984Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:44:17.506627 containerd[1988]: time="2026-04-17T23:44:17.506114973Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 17 23:44:17.506627 containerd[1988]: time="2026-04-17T23:44:17.506143164Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 17 23:44:17.506627 containerd[1988]: time="2026-04-17T23:44:17.506348652Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Apr 17 23:44:17.506627 containerd[1988]: time="2026-04-17T23:44:17.506373461Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 17 23:44:17.506627 containerd[1988]: time="2026-04-17T23:44:17.506446462Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:44:17.506627 containerd[1988]: time="2026-04-17T23:44:17.506464627Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:44:17.506961 containerd[1988]: time="2026-04-17T23:44:17.506717060Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:44:17.506961 containerd[1988]: time="2026-04-17T23:44:17.506740177Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 17 23:44:17.506961 containerd[1988]: time="2026-04-17T23:44:17.506761017Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:44:17.506961 containerd[1988]: time="2026-04-17T23:44:17.506777617Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 17 23:44:17.506961 containerd[1988]: time="2026-04-17T23:44:17.506883886Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 17 23:44:17.507149 containerd[1988]: time="2026-04-17T23:44:17.507134548Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Apr 17 23:44:17.509104 containerd[1988]: time="2026-04-17T23:44:17.508796866Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 17 23:44:17.509104 containerd[1988]: time="2026-04-17T23:44:17.508842415Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 17 23:44:17.509104 containerd[1988]: time="2026-04-17T23:44:17.508967280Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 17 23:44:17.509104 containerd[1988]: time="2026-04-17T23:44:17.509027285Z" level=info msg="metadata content store policy set" policy=shared Apr 17 23:44:17.517617 containerd[1988]: time="2026-04-17T23:44:17.517233553Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 17 23:44:17.517617 containerd[1988]: time="2026-04-17T23:44:17.517349460Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 17 23:44:17.517617 containerd[1988]: time="2026-04-17T23:44:17.517426326Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 17 23:44:17.517617 containerd[1988]: time="2026-04-17T23:44:17.517467851Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 17 23:44:17.517617 containerd[1988]: time="2026-04-17T23:44:17.517494034Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 17 23:44:17.519373 containerd[1988]: time="2026-04-17T23:44:17.518483459Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Apr 17 23:44:17.522150 containerd[1988]: time="2026-04-17T23:44:17.521861851Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 17 23:44:17.522150 containerd[1988]: time="2026-04-17T23:44:17.522092678Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 17 23:44:17.522150 containerd[1988]: time="2026-04-17T23:44:17.522116719Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 17 23:44:17.522647 containerd[1988]: time="2026-04-17T23:44:17.522136442Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 17 23:44:17.522813 containerd[1988]: time="2026-04-17T23:44:17.522720529Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 17 23:44:17.522813 containerd[1988]: time="2026-04-17T23:44:17.522745820Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 17 23:44:17.523185 containerd[1988]: time="2026-04-17T23:44:17.522905716Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 17 23:44:17.523185 containerd[1988]: time="2026-04-17T23:44:17.522934059Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 17 23:44:17.523185 containerd[1988]: time="2026-04-17T23:44:17.522956078Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 17 23:44:17.524660 containerd[1988]: time="2026-04-17T23:44:17.523328504Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Apr 17 23:44:17.524660 containerd[1988]: time="2026-04-17T23:44:17.523352512Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 17 23:44:17.524660 containerd[1988]: time="2026-04-17T23:44:17.523372716Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 17 23:44:17.524660 containerd[1988]: time="2026-04-17T23:44:17.523417458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 17 23:44:17.524660 containerd[1988]: time="2026-04-17T23:44:17.523438673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 17 23:44:17.524660 containerd[1988]: time="2026-04-17T23:44:17.523525100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 17 23:44:17.524660 containerd[1988]: time="2026-04-17T23:44:17.523550138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 17 23:44:17.524660 containerd[1988]: time="2026-04-17T23:44:17.523567621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 17 23:44:17.524660 containerd[1988]: time="2026-04-17T23:44:17.523611525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 17 23:44:17.524660 containerd[1988]: time="2026-04-17T23:44:17.523631743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 17 23:44:17.525611 containerd[1988]: time="2026-04-17T23:44:17.523653032Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 17 23:44:17.525611 containerd[1988]: time="2026-04-17T23:44:17.525109452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Apr 17 23:44:17.525611 containerd[1988]: time="2026-04-17T23:44:17.525136447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 17 23:44:17.525611 containerd[1988]: time="2026-04-17T23:44:17.525177932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 17 23:44:17.525611 containerd[1988]: time="2026-04-17T23:44:17.525197886Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 17 23:44:17.525611 containerd[1988]: time="2026-04-17T23:44:17.525218256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 17 23:44:17.525611 containerd[1988]: time="2026-04-17T23:44:17.525256415Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 17 23:44:17.525611 containerd[1988]: time="2026-04-17T23:44:17.525292042Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 17 23:44:17.525611 containerd[1988]: time="2026-04-17T23:44:17.525324176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 17 23:44:17.525611 containerd[1988]: time="2026-04-17T23:44:17.525340693Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 17 23:44:17.525611 containerd[1988]: time="2026-04-17T23:44:17.525434269Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 17 23:44:17.525611 containerd[1988]: time="2026-04-17T23:44:17.525537397Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 17 23:44:17.525611 containerd[1988]: time="2026-04-17T23:44:17.525555714Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 17 23:44:17.528966 containerd[1988]: time="2026-04-17T23:44:17.525574637Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 17 23:44:17.528966 containerd[1988]: time="2026-04-17T23:44:17.526153210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 17 23:44:17.528966 containerd[1988]: time="2026-04-17T23:44:17.527848172Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 17 23:44:17.528966 containerd[1988]: time="2026-04-17T23:44:17.527876910Z" level=info msg="NRI interface is disabled by configuration." Apr 17 23:44:17.528966 containerd[1988]: time="2026-04-17T23:44:17.527909358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 17 23:44:17.530382 containerd[1988]: time="2026-04-17T23:44:17.530293576Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 17 23:44:17.531307 containerd[1988]: time="2026-04-17T23:44:17.530656535Z" level=info msg="Connect containerd service" Apr 17 23:44:17.531307 containerd[1988]: time="2026-04-17T23:44:17.530729801Z" level=info msg="using legacy CRI server" Apr 17 23:44:17.531307 containerd[1988]: time="2026-04-17T23:44:17.530743274Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 17 23:44:17.531307 containerd[1988]: time="2026-04-17T23:44:17.530902902Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 17 23:44:17.532895 containerd[1988]: time="2026-04-17T23:44:17.532866092Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 17 23:44:17.536048 containerd[1988]: time="2026-04-17T23:44:17.535526474Z" level=info msg="Start subscribing containerd event" Apr 17 23:44:17.536048 containerd[1988]: time="2026-04-17T23:44:17.535608398Z" level=info msg="Start recovering state" Apr 17 23:44:17.536048 containerd[1988]: time="2026-04-17T23:44:17.535695128Z" level=info msg="Start event monitor" Apr 17 23:44:17.536048 containerd[1988]: time="2026-04-17T23:44:17.535717508Z" level=info msg="Start 
snapshots syncer" Apr 17 23:44:17.536048 containerd[1988]: time="2026-04-17T23:44:17.535732250Z" level=info msg="Start cni network conf syncer for default" Apr 17 23:44:17.536048 containerd[1988]: time="2026-04-17T23:44:17.535744491Z" level=info msg="Start streaming server" Apr 17 23:44:17.536881 containerd[1988]: time="2026-04-17T23:44:17.536859399Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 17 23:44:17.540130 containerd[1988]: time="2026-04-17T23:44:17.537925903Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 17 23:44:17.540130 containerd[1988]: time="2026-04-17T23:44:17.539211147Z" level=info msg="containerd successfully booted in 0.127216s" Apr 17 23:44:17.540235 amazon-ssm-agent[2151]: 2026-04-17 23:44:17 INFO http_proxy: Apr 17 23:44:17.538123 systemd[1]: Started containerd.service - containerd container runtime. Apr 17 23:44:17.637897 amazon-ssm-agent[2151]: 2026-04-17 23:44:17 INFO no_proxy: Apr 17 23:44:17.736523 amazon-ssm-agent[2151]: 2026-04-17 23:44:17 INFO Checking if agent identity type OnPrem can be assumed Apr 17 23:44:17.835201 amazon-ssm-agent[2151]: 2026-04-17 23:44:17 INFO Checking if agent identity type EC2 can be assumed Apr 17 23:44:17.935031 amazon-ssm-agent[2151]: 2026-04-17 23:44:17 INFO Agent will take identity from EC2 Apr 17 23:44:17.971987 tar[1972]: linux-amd64/README.md Apr 17 23:44:17.980172 amazon-ssm-agent[2151]: 2026-04-17 23:44:17 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 17 23:44:17.980172 amazon-ssm-agent[2151]: 2026-04-17 23:44:17 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 17 23:44:17.980172 amazon-ssm-agent[2151]: 2026-04-17 23:44:17 INFO [amazon-ssm-agent] using named pipe channel for IPC Apr 17 23:44:17.980172 amazon-ssm-agent[2151]: 2026-04-17 23:44:17 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Apr 17 23:44:17.980172 amazon-ssm-agent[2151]: 2026-04-17 23:44:17 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Apr 17 
23:44:17.980427 amazon-ssm-agent[2151]: 2026-04-17 23:44:17 INFO [amazon-ssm-agent] Starting Core Agent Apr 17 23:44:17.980427 amazon-ssm-agent[2151]: 2026-04-17 23:44:17 INFO [amazon-ssm-agent] registrar detected. Attempting registration Apr 17 23:44:17.980427 amazon-ssm-agent[2151]: 2026-04-17 23:44:17 INFO [Registrar] Starting registrar module Apr 17 23:44:17.980427 amazon-ssm-agent[2151]: 2026-04-17 23:44:17 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Apr 17 23:44:17.980427 amazon-ssm-agent[2151]: 2026-04-17 23:44:17 INFO [EC2Identity] EC2 registration was successful. Apr 17 23:44:17.980427 amazon-ssm-agent[2151]: 2026-04-17 23:44:17 INFO [CredentialRefresher] credentialRefresher has started Apr 17 23:44:17.980427 amazon-ssm-agent[2151]: 2026-04-17 23:44:17 INFO [CredentialRefresher] Starting credentials refresher loop Apr 17 23:44:17.980427 amazon-ssm-agent[2151]: 2026-04-17 23:44:17 INFO EC2RoleProvider Successfully connected with instance profile role credentials Apr 17 23:44:17.985731 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 17 23:44:18.034124 amazon-ssm-agent[2151]: 2026-04-17 23:44:17 INFO [CredentialRefresher] Next credential rotation will be in 32.13330622901667 minutes Apr 17 23:44:18.258783 sshd[2149]: Accepted publickey for core from 20.229.252.112 port 59896 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:44:18.262549 sshd[2149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:44:18.277753 systemd-logind[1967]: New session 1 of user core. Apr 17 23:44:18.279358 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 17 23:44:18.286069 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 17 23:44:18.305151 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Apr 17 23:44:18.314079 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 17 23:44:18.324469 (systemd)[2197]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 17 23:44:18.467828 systemd[2197]: Queued start job for default target default.target. Apr 17 23:44:18.473338 systemd[2197]: Created slice app.slice - User Application Slice. Apr 17 23:44:18.473386 systemd[2197]: Reached target paths.target - Paths. Apr 17 23:44:18.473409 systemd[2197]: Reached target timers.target - Timers. Apr 17 23:44:18.475316 systemd[2197]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 17 23:44:18.498554 systemd[2197]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 17 23:44:18.498760 systemd[2197]: Reached target sockets.target - Sockets. Apr 17 23:44:18.498784 systemd[2197]: Reached target basic.target - Basic System. Apr 17 23:44:18.498848 systemd[2197]: Reached target default.target - Main User Target. Apr 17 23:44:18.498888 systemd[2197]: Startup finished in 166ms. Apr 17 23:44:18.498998 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 17 23:44:18.506025 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 17 23:44:18.952896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:44:18.954530 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 17 23:44:18.956229 systemd[1]: Startup finished in 605ms (kernel) + 27.874s (initrd) + 6.423s (userspace) = 34.903s. 
Apr 17 23:44:18.966514 (kubelet)[2212]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:44:19.000040 amazon-ssm-agent[2151]: 2026-04-17 23:44:18 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Apr 17 23:44:19.102010 amazon-ssm-agent[2151]: 2026-04-17 23:44:19 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2218) started Apr 17 23:44:19.203039 amazon-ssm-agent[2151]: 2026-04-17 23:44:19 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Apr 17 23:44:19.216013 systemd[1]: Started sshd@1-172.31.19.162:22-20.229.252.112:59904.service - OpenSSH per-connection server daemon (20.229.252.112:59904). Apr 17 23:44:19.386654 ntpd[1953]: Listen normally on 6 eth0 [fe80::47b:1ff:fec6:ed9%2]:123 Apr 17 23:44:19.387145 ntpd[1953]: 17 Apr 23:44:19 ntpd[1953]: Listen normally on 6 eth0 [fe80::47b:1ff:fec6:ed9%2]:123 Apr 17 23:44:19.770468 kubelet[2212]: E0417 23:44:19.770370 2212 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:44:19.772955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:44:19.773161 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 23:44:19.773816 systemd[1]: kubelet.service: Consumed 1.045s CPU time. 
Apr 17 23:44:20.199970 sshd[2235]: Accepted publickey for core from 20.229.252.112 port 59904 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:44:20.201509 sshd[2235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:44:20.207146 systemd-logind[1967]: New session 2 of user core. Apr 17 23:44:20.215059 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 17 23:44:20.880701 sshd[2235]: pam_unix(sshd:session): session closed for user core Apr 17 23:44:20.884237 systemd[1]: sshd@1-172.31.19.162:22-20.229.252.112:59904.service: Deactivated successfully. Apr 17 23:44:20.886547 systemd[1]: session-2.scope: Deactivated successfully. Apr 17 23:44:20.888175 systemd-logind[1967]: Session 2 logged out. Waiting for processes to exit. Apr 17 23:44:20.889371 systemd-logind[1967]: Removed session 2. Apr 17 23:44:21.066973 systemd[1]: Started sshd@2-172.31.19.162:22-20.229.252.112:59912.service - OpenSSH per-connection server daemon (20.229.252.112:59912). Apr 17 23:44:22.082199 sshd[2243]: Accepted publickey for core from 20.229.252.112 port 59912 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:44:22.083118 sshd[2243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:44:22.089936 systemd-logind[1967]: New session 3 of user core. Apr 17 23:44:22.099862 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 17 23:44:22.783988 sshd[2243]: pam_unix(sshd:session): session closed for user core Apr 17 23:44:22.788290 systemd[1]: sshd@2-172.31.19.162:22-20.229.252.112:59912.service: Deactivated successfully. Apr 17 23:44:22.790377 systemd[1]: session-3.scope: Deactivated successfully. Apr 17 23:44:22.791262 systemd-logind[1967]: Session 3 logged out. Waiting for processes to exit. Apr 17 23:44:22.792525 systemd-logind[1967]: Removed session 3. 
Apr 17 23:44:22.954992 systemd[1]: Started sshd@3-172.31.19.162:22-20.229.252.112:59920.service - OpenSSH per-connection server daemon (20.229.252.112:59920). Apr 17 23:44:24.969072 systemd-resolved[1777]: Clock change detected. Flushing caches. Apr 17 23:44:25.509789 sshd[2250]: Accepted publickey for core from 20.229.252.112 port 59920 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:44:25.511310 sshd[2250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:44:25.516875 systemd-logind[1967]: New session 4 of user core. Apr 17 23:44:25.522977 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 17 23:44:26.189449 sshd[2250]: pam_unix(sshd:session): session closed for user core Apr 17 23:44:26.192831 systemd[1]: sshd@3-172.31.19.162:22-20.229.252.112:59920.service: Deactivated successfully. Apr 17 23:44:26.194968 systemd[1]: session-4.scope: Deactivated successfully. Apr 17 23:44:26.196487 systemd-logind[1967]: Session 4 logged out. Waiting for processes to exit. Apr 17 23:44:26.197911 systemd-logind[1967]: Removed session 4. Apr 17 23:44:26.362078 systemd[1]: Started sshd@4-172.31.19.162:22-20.229.252.112:53278.service - OpenSSH per-connection server daemon (20.229.252.112:53278). Apr 17 23:44:27.335291 sshd[2257]: Accepted publickey for core from 20.229.252.112 port 53278 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:44:27.336924 sshd[2257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:44:27.342020 systemd-logind[1967]: New session 5 of user core. Apr 17 23:44:27.348136 systemd[1]: Started session-5.scope - Session 5 of User core. 
Apr 17 23:44:27.870589 sudo[2260]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 17 23:44:27.871093 sudo[2260]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:44:27.884522 sudo[2260]: pam_unix(sudo:session): session closed for user root Apr 17 23:44:28.044183 sshd[2257]: pam_unix(sshd:session): session closed for user core Apr 17 23:44:28.048651 systemd-logind[1967]: Session 5 logged out. Waiting for processes to exit. Apr 17 23:44:28.049923 systemd[1]: sshd@4-172.31.19.162:22-20.229.252.112:53278.service: Deactivated successfully. Apr 17 23:44:28.052030 systemd[1]: session-5.scope: Deactivated successfully. Apr 17 23:44:28.053018 systemd-logind[1967]: Removed session 5. Apr 17 23:44:28.220142 systemd[1]: Started sshd@5-172.31.19.162:22-20.229.252.112:53282.service - OpenSSH per-connection server daemon (20.229.252.112:53282). Apr 17 23:44:29.193606 sshd[2265]: Accepted publickey for core from 20.229.252.112 port 53282 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:44:29.195511 sshd[2265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:44:29.200913 systemd-logind[1967]: New session 6 of user core. Apr 17 23:44:29.211004 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 17 23:44:29.716628 sudo[2269]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 17 23:44:29.717051 sudo[2269]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:44:29.721080 sudo[2269]: pam_unix(sudo:session): session closed for user root Apr 17 23:44:29.726621 sudo[2268]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 17 23:44:29.727174 sudo[2268]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:44:29.748291 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 17 23:44:29.750722 auditctl[2272]: No rules Apr 17 23:44:29.751935 systemd[1]: audit-rules.service: Deactivated successfully. Apr 17 23:44:29.752220 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 17 23:44:29.758139 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 17 23:44:29.786466 augenrules[2290]: No rules Apr 17 23:44:29.788065 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 17 23:44:29.789277 sudo[2268]: pam_unix(sudo:session): session closed for user root Apr 17 23:44:29.949040 sshd[2265]: pam_unix(sshd:session): session closed for user core Apr 17 23:44:29.952550 systemd[1]: sshd@5-172.31.19.162:22-20.229.252.112:53282.service: Deactivated successfully. Apr 17 23:44:29.954486 systemd[1]: session-6.scope: Deactivated successfully. Apr 17 23:44:29.956059 systemd-logind[1967]: Session 6 logged out. Waiting for processes to exit. Apr 17 23:44:29.957202 systemd-logind[1967]: Removed session 6. Apr 17 23:44:30.121065 systemd[1]: Started sshd@6-172.31.19.162:22-20.229.252.112:53284.service - OpenSSH per-connection server daemon (20.229.252.112:53284). 
Apr 17 23:44:31.090976 sshd[2298]: Accepted publickey for core from 20.229.252.112 port 53284 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:44:31.092688 sshd[2298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:44:31.098403 systemd-logind[1967]: New session 7 of user core.
Apr 17 23:44:31.104132 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 17 23:44:31.478935 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 17 23:44:31.496139 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:44:31.612623 sudo[2304]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 17 23:44:31.613034 sudo[2304]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 17 23:44:31.755917 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:44:31.757741 (kubelet)[2318]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 23:44:31.820733 kubelet[2318]: E0417 23:44:31.820106 2318 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 23:44:31.827756 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 23:44:31.827961 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 23:44:32.066087 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 17 23:44:32.069094 (dockerd)[2332]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 17 23:44:32.450013 dockerd[2332]: time="2026-04-17T23:44:32.449878928Z" level=info msg="Starting up"
Apr 17 23:44:32.546439 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2994276045-merged.mount: Deactivated successfully.
Apr 17 23:44:32.579325 dockerd[2332]: time="2026-04-17T23:44:32.579274056Z" level=info msg="Loading containers: start."
Apr 17 23:44:32.697744 kernel: Initializing XFRM netlink socket
Apr 17 23:44:32.726078 (udev-worker)[2353]: Network interface NamePolicy= disabled on kernel command line.
Apr 17 23:44:32.785122 systemd-networkd[1821]: docker0: Link UP
Apr 17 23:44:32.799434 dockerd[2332]: time="2026-04-17T23:44:32.799376742Z" level=info msg="Loading containers: done."
Apr 17 23:44:32.818812 dockerd[2332]: time="2026-04-17T23:44:32.818749824Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 17 23:44:32.819035 dockerd[2332]: time="2026-04-17T23:44:32.818883221Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 17 23:44:32.819091 dockerd[2332]: time="2026-04-17T23:44:32.819028895Z" level=info msg="Daemon has completed initialization"
Apr 17 23:44:32.852548 dockerd[2332]: time="2026-04-17T23:44:32.851918685Z" level=info msg="API listen on /run/docker.sock"
Apr 17 23:44:32.852157 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 17 23:44:33.541493 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3935578792-merged.mount: Deactivated successfully.
Apr 17 23:44:33.558885 containerd[1988]: time="2026-04-17T23:44:33.558839801Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\""
Apr 17 23:44:34.115229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount648402128.mount: Deactivated successfully.
Apr 17 23:44:36.082268 containerd[1988]: time="2026-04-17T23:44:36.082208777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:44:36.083631 containerd[1988]: time="2026-04-17T23:44:36.083583204Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.7: active requests=0, bytes read=27100514"
Apr 17 23:44:36.084728 containerd[1988]: time="2026-04-17T23:44:36.084602780Z" level=info msg="ImageCreate event name:\"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:44:36.087471 containerd[1988]: time="2026-04-17T23:44:36.087401619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:44:36.089234 containerd[1988]: time="2026-04-17T23:44:36.088851116Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.7\" with image id \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\", size \"27097113\" in 2.529969194s"
Apr 17 23:44:36.089234 containerd[1988]: time="2026-04-17T23:44:36.088896953Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\" returns image reference \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\""
Apr 17 23:44:36.089975 containerd[1988]: time="2026-04-17T23:44:36.089939962Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\""
Apr 17 23:44:37.788619 containerd[1988]: time="2026-04-17T23:44:37.788564837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:44:37.796664 containerd[1988]: time="2026-04-17T23:44:37.796588245Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.7: active requests=0, bytes read=21252738"
Apr 17 23:44:37.802108 containerd[1988]: time="2026-04-17T23:44:37.801627049Z" level=info msg="ImageCreate event name:\"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:44:37.804862 containerd[1988]: time="2026-04-17T23:44:37.804782802Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:44:37.808266 containerd[1988]: time="2026-04-17T23:44:37.806507853Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.7\" with image id \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\", size \"22819085\" in 1.716424705s"
Apr 17 23:44:37.808266 containerd[1988]: time="2026-04-17T23:44:37.806564581Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\" returns image reference \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\""
Apr 17 23:44:37.809178 containerd[1988]: time="2026-04-17T23:44:37.809146835Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\""
Apr 17 23:44:39.136046 containerd[1988]: time="2026-04-17T23:44:39.135992502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:44:39.137411 containerd[1988]: time="2026-04-17T23:44:39.137359950Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.7: active requests=0, bytes read=15810891"
Apr 17 23:44:39.138866 containerd[1988]: time="2026-04-17T23:44:39.138804177Z" level=info msg="ImageCreate event name:\"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:44:39.142168 containerd[1988]: time="2026-04-17T23:44:39.142093006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:44:39.144039 containerd[1988]: time="2026-04-17T23:44:39.143272824Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.7\" with image id \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\", size \"17377256\" in 1.334082383s"
Apr 17 23:44:39.144039 containerd[1988]: time="2026-04-17T23:44:39.143315706Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\" returns image reference \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\""
Apr 17 23:44:39.144516 containerd[1988]: time="2026-04-17T23:44:39.144488249Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\""
Apr 17 23:44:40.262208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3602989476.mount: Deactivated successfully.
Apr 17 23:44:40.684228 containerd[1988]: time="2026-04-17T23:44:40.684063881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:44:40.685449 containerd[1988]: time="2026-04-17T23:44:40.685399431Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.7: active requests=0, bytes read=25972954"
Apr 17 23:44:40.686480 containerd[1988]: time="2026-04-17T23:44:40.686416389Z" level=info msg="ImageCreate event name:\"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:44:40.688869 containerd[1988]: time="2026-04-17T23:44:40.688788370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:44:40.689822 containerd[1988]: time="2026-04-17T23:44:40.689517288Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.7\" with image id \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\", repo tag \"registry.k8s.io/kube-proxy:v1.34.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\", size \"25971973\" in 1.544994713s"
Apr 17 23:44:40.689822 containerd[1988]: time="2026-04-17T23:44:40.689560757Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\" returns image reference \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\""
Apr 17 23:44:40.690437 containerd[1988]: time="2026-04-17T23:44:40.690092144Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Apr 17 23:44:41.184606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3364045004.mount: Deactivated successfully.
Apr 17 23:44:41.979231 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 17 23:44:41.987098 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:44:42.232996 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:44:42.244427 (kubelet)[2602]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 23:44:42.318719 kubelet[2602]: E0417 23:44:42.317624 2602 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 23:44:42.321685 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 23:44:42.321908 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 23:44:42.728494 containerd[1988]: time="2026-04-17T23:44:42.728027252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:44:42.730563 containerd[1988]: time="2026-04-17T23:44:42.730274803Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007"
Apr 17 23:44:42.733262 containerd[1988]: time="2026-04-17T23:44:42.732799794Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:44:42.737303 containerd[1988]: time="2026-04-17T23:44:42.737251995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:44:42.738967 containerd[1988]: time="2026-04-17T23:44:42.738919911Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.048793582s"
Apr 17 23:44:42.738967 containerd[1988]: time="2026-04-17T23:44:42.738961695Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Apr 17 23:44:42.740040 containerd[1988]: time="2026-04-17T23:44:42.740005438Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 17 23:44:43.236334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2593065847.mount: Deactivated successfully.
Apr 17 23:44:43.247204 containerd[1988]: time="2026-04-17T23:44:43.247146532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:44:43.249117 containerd[1988]: time="2026-04-17T23:44:43.249044547Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Apr 17 23:44:43.251415 containerd[1988]: time="2026-04-17T23:44:43.251349546Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:44:43.255015 containerd[1988]: time="2026-04-17T23:44:43.254973978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:44:43.256199 containerd[1988]: time="2026-04-17T23:44:43.256020065Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 515.973273ms"
Apr 17 23:44:43.256199 containerd[1988]: time="2026-04-17T23:44:43.256061842Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Apr 17 23:44:43.256712 containerd[1988]: time="2026-04-17T23:44:43.256516843Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Apr 17 23:44:43.798468 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2915447331.mount: Deactivated successfully.
Apr 17 23:44:45.061145 containerd[1988]: time="2026-04-17T23:44:45.061080387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:44:45.063347 containerd[1988]: time="2026-04-17T23:44:45.063273826Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22874817"
Apr 17 23:44:45.065971 containerd[1988]: time="2026-04-17T23:44:45.065888424Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:44:45.071509 containerd[1988]: time="2026-04-17T23:44:45.071433635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:44:45.073230 containerd[1988]: time="2026-04-17T23:44:45.073018409Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.816465561s"
Apr 17 23:44:45.073230 containerd[1988]: time="2026-04-17T23:44:45.073067432Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\""
Apr 17 23:44:48.599160 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Apr 17 23:44:48.695817 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:44:48.702094 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:44:48.741679 systemd[1]: Reloading requested from client PID 2708 ('systemctl') (unit session-7.scope)...
Apr 17 23:44:48.741713 systemd[1]: Reloading...
Apr 17 23:44:48.866773 zram_generator::config[2744]: No configuration found.
Apr 17 23:44:49.046670 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:44:49.134945 systemd[1]: Reloading finished in 392 ms.
Apr 17 23:44:49.206145 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:44:49.207328 systemd[1]: kubelet.service: Deactivated successfully.
Apr 17 23:44:49.207658 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:44:49.212097 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:44:49.424212 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:44:49.435406 (kubelet)[2813]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 17 23:44:49.501252 kubelet[2813]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 17 23:44:49.501252 kubelet[2813]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 17 23:44:49.501728 kubelet[2813]: I0417 23:44:49.501341 2813 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 17 23:44:50.645410 kubelet[2813]: I0417 23:44:50.645363 2813 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Apr 17 23:44:50.645410 kubelet[2813]: I0417 23:44:50.645396 2813 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 17 23:44:50.646601 kubelet[2813]: I0417 23:44:50.646555 2813 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 17 23:44:50.646601 kubelet[2813]: I0417 23:44:50.646596 2813 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 17 23:44:50.647400 kubelet[2813]: I0417 23:44:50.647365 2813 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 17 23:44:50.659170 kubelet[2813]: I0417 23:44:50.658591 2813 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 17 23:44:50.659650 kubelet[2813]: E0417 23:44:50.659616 2813 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.19.162:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.19.162:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 17 23:44:50.662529 kubelet[2813]: E0417 23:44:50.662489 2813 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 17 23:44:50.662648 kubelet[2813]: I0417 23:44:50.662557 2813 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 17 23:44:50.665109 kubelet[2813]: I0417 23:44:50.665087 2813 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 17 23:44:50.665989 kubelet[2813]: I0417 23:44:50.665952 2813 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 17 23:44:50.666193 kubelet[2813]: I0417 23:44:50.665987 2813 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-162","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 17 23:44:50.666193 kubelet[2813]: I0417 23:44:50.666192 2813 topology_manager.go:138] "Creating topology manager with none policy"
Apr 17 23:44:50.666541 kubelet[2813]: I0417 23:44:50.666207 2813 container_manager_linux.go:306] "Creating device plugin manager"
Apr 17 23:44:50.666541 kubelet[2813]: I0417 23:44:50.666324 2813 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 17 23:44:50.669868 kubelet[2813]: I0417 23:44:50.669843 2813 state_mem.go:36] "Initialized new in-memory state store"
Apr 17 23:44:50.670066 kubelet[2813]: I0417 23:44:50.670046 2813 kubelet.go:475] "Attempting to sync node with API server"
Apr 17 23:44:50.670066 kubelet[2813]: I0417 23:44:50.670067 2813 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 17 23:44:50.670165 kubelet[2813]: I0417 23:44:50.670098 2813 kubelet.go:387] "Adding apiserver pod source"
Apr 17 23:44:50.670165 kubelet[2813]: I0417 23:44:50.670118 2813 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 17 23:44:50.673031 kubelet[2813]: E0417 23:44:50.672968 2813 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.19.162:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-162&limit=500&resourceVersion=0\": dial tcp 172.31.19.162:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 17 23:44:50.674477 kubelet[2813]: I0417 23:44:50.673858 2813 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 17 23:44:50.674596 kubelet[2813]: I0417 23:44:50.674569 2813 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 17 23:44:50.674647 kubelet[2813]: I0417 23:44:50.674617 2813 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 17 23:44:50.674690 kubelet[2813]: W0417 23:44:50.674680 2813 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 17 23:44:50.678227 kubelet[2813]: I0417 23:44:50.677443 2813 server.go:1262] "Started kubelet"
Apr 17 23:44:50.678227 kubelet[2813]: E0417 23:44:50.677645 2813 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.19.162:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.19.162:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 17 23:44:50.681166 kubelet[2813]: I0417 23:44:50.680532 2813 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 17 23:44:50.681620 kubelet[2813]: I0417 23:44:50.681587 2813 server.go:310] "Adding debug handlers to kubelet server"
Apr 17 23:44:50.683311 kubelet[2813]: I0417 23:44:50.683080 2813 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 17 23:44:50.683311 kubelet[2813]: I0417 23:44:50.683151 2813 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 17 23:44:50.684528 kubelet[2813]: I0417 23:44:50.684105 2813 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 17 23:44:50.687724 kubelet[2813]: E0417 23:44:50.685010 2813 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.19.162:6443/api/v1/namespaces/default/events\": dial tcp 172.31.19.162:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-19-162.18a74996f2257622 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-162,UID:ip-172-31-19-162,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-19-162,},FirstTimestamp:2026-04-17 23:44:50.67741341 +0000 UTC m=+1.215860471,LastTimestamp:2026-04-17 23:44:50.67741341 +0000 UTC m=+1.215860471,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-162,}"
Apr 17 23:44:50.689980 kubelet[2813]: I0417 23:44:50.689955 2813 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 17 23:44:50.697625 kubelet[2813]: I0417 23:44:50.697591 2813 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 17 23:44:50.698830 kubelet[2813]: I0417 23:44:50.698809 2813 volume_manager.go:313] "Starting Kubelet Volume Manager"
Apr 17 23:44:50.699185 kubelet[2813]: E0417 23:44:50.699165 2813 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-19-162\" not found"
Apr 17 23:44:50.700321 kubelet[2813]: E0417 23:44:50.700275 2813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-162?timeout=10s\": dial tcp 172.31.19.162:6443: connect: connection refused" interval="200ms"
Apr 17 23:44:50.700417 kubelet[2813]: I0417 23:44:50.700332 2813 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 17 23:44:50.701039 kubelet[2813]: E0417 23:44:50.701002 2813 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.19.162:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.19.162:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 17 23:44:50.701341 kubelet[2813]: I0417 23:44:50.701319 2813 factory.go:223] Registration of the systemd container factory successfully
Apr 17 23:44:50.701422 kubelet[2813]: I0417 23:44:50.701405 2813 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 17 23:44:50.701865 kubelet[2813]: I0417 23:44:50.701835 2813 reconciler.go:29] "Reconciler: start to sync state"
Apr 17 23:44:50.705251 kubelet[2813]: I0417 23:44:50.704334 2813 factory.go:223] Registration of the containerd container factory successfully
Apr 17 23:44:50.721651 kubelet[2813]: I0417 23:44:50.721601 2813 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 17 23:44:50.724097 kubelet[2813]: I0417 23:44:50.724067 2813 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 17 23:44:50.724264 kubelet[2813]: I0417 23:44:50.724253 2813 status_manager.go:244] "Starting to sync pod status with apiserver"
Apr 17 23:44:50.724355 kubelet[2813]: I0417 23:44:50.724347 2813 kubelet.go:2428] "Starting kubelet main sync loop"
Apr 17 23:44:50.724488 kubelet[2813]: E0417 23:44:50.724461 2813 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 17 23:44:50.732767 kubelet[2813]: E0417 23:44:50.732618 2813 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.19.162:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.19.162:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 17 23:44:50.737778 kubelet[2813]: I0417 23:44:50.737341 2813 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 17 23:44:50.737946 kubelet[2813]: I0417 23:44:50.737933 2813 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 17 23:44:50.738184 kubelet[2813]: I0417 23:44:50.738164 2813 state_mem.go:36] "Initialized new in-memory state store"
Apr 17 23:44:50.742294 kubelet[2813]: I0417 23:44:50.742261 2813 policy_none.go:49] "None policy: Start"
Apr 17 23:44:50.742294 kubelet[2813]: I0417 23:44:50.742297 2813 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 17 23:44:50.742462 kubelet[2813]: I0417 23:44:50.742313 2813 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 17 23:44:50.745784 kubelet[2813]: I0417 23:44:50.745753 2813 policy_none.go:47] "Start"
Apr 17 23:44:50.750486 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 17 23:44:50.759555 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 17 23:44:50.764081 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 17 23:44:50.772571 kubelet[2813]: E0417 23:44:50.771972 2813 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 17 23:44:50.772571 kubelet[2813]: I0417 23:44:50.772224 2813 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 17 23:44:50.772571 kubelet[2813]: I0417 23:44:50.772241 2813 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 17 23:44:50.776412 kubelet[2813]: E0417 23:44:50.776374 2813 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 17 23:44:50.776628 kubelet[2813]: E0417 23:44:50.776454 2813 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-19-162\" not found"
Apr 17 23:44:50.776799 kubelet[2813]: I0417 23:44:50.776779 2813 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 17 23:44:50.841024 systemd[1]: Created slice kubepods-burstable-pod8e5965976f76651999cd4c0f9a43cb65.slice - libcontainer container kubepods-burstable-pod8e5965976f76651999cd4c0f9a43cb65.slice.
Apr 17 23:44:50.855140 kubelet[2813]: E0417 23:44:50.854786 2813 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-162\" not found" node="ip-172-31-19-162"
Apr 17 23:44:50.858936 systemd[1]: Created slice kubepods-burstable-pod51ba0079d2689840fd5a4bd57436d762.slice - libcontainer container kubepods-burstable-pod51ba0079d2689840fd5a4bd57436d762.slice.
Apr 17 23:44:50.861573 kubelet[2813]: E0417 23:44:50.861545 2813 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-162\" not found" node="ip-172-31-19-162"
Apr 17 23:44:50.864395 systemd[1]: Created slice kubepods-burstable-pod1a24e08a7f9a6ffbca6eea970d08cdbb.slice - libcontainer container kubepods-burstable-pod1a24e08a7f9a6ffbca6eea970d08cdbb.slice.
Apr 17 23:44:50.866295 kubelet[2813]: E0417 23:44:50.866267 2813 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-162\" not found" node="ip-172-31-19-162"
Apr 17 23:44:50.875079 kubelet[2813]: I0417 23:44:50.874780 2813 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-162"
Apr 17 23:44:50.875258 kubelet[2813]: E0417 23:44:50.875222 2813 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.162:6443/api/v1/nodes\": dial tcp 172.31.19.162:6443: connect: connection refused" node="ip-172-31-19-162"
Apr 17 23:44:50.901106 kubelet[2813]: E0417 23:44:50.900986 2813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-162?timeout=10s\": dial tcp 172.31.19.162:6443: connect: connection refused" interval="400ms"
Apr 17 23:44:50.903423 kubelet[2813]: I0417 23:44:50.903133 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8e5965976f76651999cd4c0f9a43cb65-ca-certs\") pod \"kube-apiserver-ip-172-31-19-162\" (UID: \"8e5965976f76651999cd4c0f9a43cb65\") " pod="kube-system/kube-apiserver-ip-172-31-19-162"
Apr 17 23:44:50.903423 kubelet[2813]: I0417 23:44:50.903177 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8e5965976f76651999cd4c0f9a43cb65-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-162\" (UID: \"8e5965976f76651999cd4c0f9a43cb65\") " pod="kube-system/kube-apiserver-ip-172-31-19-162"
Apr 17 23:44:50.903423 kubelet[2813]: I0417 23:44:50.903202 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8e5965976f76651999cd4c0f9a43cb65-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-162\" (UID: \"8e5965976f76651999cd4c0f9a43cb65\") " pod="kube-system/kube-apiserver-ip-172-31-19-162"
Apr 17 23:44:50.903423 kubelet[2813]: I0417 23:44:50.903233 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/51ba0079d2689840fd5a4bd57436d762-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-162\" (UID: \"51ba0079d2689840fd5a4bd57436d762\") " pod="kube-system/kube-controller-manager-ip-172-31-19-162"
Apr 17 23:44:50.903423 kubelet[2813]: I0417 23:44:50.903270 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/51ba0079d2689840fd5a4bd57436d762-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-162\" (UID: \"51ba0079d2689840fd5a4bd57436d762\") " pod="kube-system/kube-controller-manager-ip-172-31-19-162"
Apr 17 23:44:50.903637 kubelet[2813]: I0417 23:44:50.903301 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/51ba0079d2689840fd5a4bd57436d762-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-162\" (UID: \"51ba0079d2689840fd5a4bd57436d762\") " pod="kube-system/kube-controller-manager-ip-172-31-19-162"
Apr 17 23:44:50.903637 kubelet[2813]: I0417 23:44:50.903331 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/51ba0079d2689840fd5a4bd57436d762-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-162\" (UID: \"51ba0079d2689840fd5a4bd57436d762\") " pod="kube-system/kube-controller-manager-ip-172-31-19-162"
Apr 17 23:44:50.903637 kubelet[2813]: I0417 23:44:50.903355 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/51ba0079d2689840fd5a4bd57436d762-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-162\" (UID: \"51ba0079d2689840fd5a4bd57436d762\") " pod="kube-system/kube-controller-manager-ip-172-31-19-162"
Apr 17 23:44:50.903637 kubelet[2813]: I0417 23:44:50.903378 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1a24e08a7f9a6ffbca6eea970d08cdbb-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-162\" (UID: \"1a24e08a7f9a6ffbca6eea970d08cdbb\") " pod="kube-system/kube-scheduler-ip-172-31-19-162"
Apr 17 23:44:51.077468 kubelet[2813]: I0417 23:44:51.077426 2813 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-162"
Apr 17 23:44:51.077815 kubelet[2813]: E0417 23:44:51.077784 2813 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.162:6443/api/v1/nodes\": dial tcp 172.31.19.162:6443: connect: connection refused" node="ip-172-31-19-162"
Apr 17 23:44:51.160910 containerd[1988]: time="2026-04-17T23:44:51.160766951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-162,Uid:8e5965976f76651999cd4c0f9a43cb65,Namespace:kube-system,Attempt:0,}"
Apr 17 23:44:51.174780 containerd[1988]: time="2026-04-17T23:44:51.174627968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-162,Uid:1a24e08a7f9a6ffbca6eea970d08cdbb,Namespace:kube-system,Attempt:0,}"
Apr 17 23:44:51.175108 containerd[1988]: time="2026-04-17T23:44:51.174630238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-162,Uid:51ba0079d2689840fd5a4bd57436d762,Namespace:kube-system,Attempt:0,}"
Apr 17 23:44:51.302298 kubelet[2813]: E0417 23:44:51.302259 2813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-162?timeout=10s\": dial tcp 172.31.19.162:6443: connect: connection refused" interval="800ms"
Apr 17 23:44:51.480100 kubelet[2813]: I0417 23:44:51.480075 2813 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-162"
Apr 17 23:44:51.480431 kubelet[2813]: E0417 23:44:51.480403 2813 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.162:6443/api/v1/nodes\": dial tcp 172.31.19.162:6443: connect: connection refused" node="ip-172-31-19-162"
Apr 17 23:44:51.653223 kubelet[2813]: E0417 23:44:51.653175 2813 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.19.162:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-162&limit=500&resourceVersion=0\": dial tcp 172.31.19.162:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 17 23:44:51.674302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1484866291.mount: Deactivated successfully.
Apr 17 23:44:51.692103 containerd[1988]: time="2026-04-17T23:44:51.692042718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 17 23:44:51.694094 containerd[1988]: time="2026-04-17T23:44:51.694021134Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Apr 17 23:44:51.696147 containerd[1988]: time="2026-04-17T23:44:51.696103574Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 17 23:44:51.698067 containerd[1988]: time="2026-04-17T23:44:51.698026578Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 17 23:44:51.700471 containerd[1988]: time="2026-04-17T23:44:51.700406219Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 17 23:44:51.702680 containerd[1988]: time="2026-04-17T23:44:51.702631861Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 17 23:44:51.704452 containerd[1988]: time="2026-04-17T23:44:51.704127328Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 17 23:44:51.707977 containerd[1988]: time="2026-04-17T23:44:51.707930714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 17 23:44:51.708804 containerd[1988]: time="2026-04-17T23:44:51.708762413Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 533.882684ms"
Apr 17 23:44:51.711526 containerd[1988]: time="2026-04-17T23:44:51.711480460Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 536.720742ms"
Apr 17 23:44:51.713562 containerd[1988]: time="2026-04-17T23:44:51.713502483Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 552.640721ms"
Apr 17 23:44:51.754135 kubelet[2813]: E0417 23:44:51.753977 2813 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.19.162:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.19.162:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 17 23:44:51.928466 containerd[1988]: time="2026-04-17T23:44:51.928037015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:44:51.928466 containerd[1988]: time="2026-04-17T23:44:51.928093929Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:44:51.928466 containerd[1988]: time="2026-04-17T23:44:51.928108933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:44:51.928466 containerd[1988]: time="2026-04-17T23:44:51.928198998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:44:51.941319 containerd[1988]: time="2026-04-17T23:44:51.940011295Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:44:51.941319 containerd[1988]: time="2026-04-17T23:44:51.940081618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:44:51.941319 containerd[1988]: time="2026-04-17T23:44:51.940118364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:44:51.941319 containerd[1988]: time="2026-04-17T23:44:51.940238208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:44:51.941319 containerd[1988]: time="2026-04-17T23:44:51.940830623Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:44:51.941319 containerd[1988]: time="2026-04-17T23:44:51.940886168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:44:51.941319 containerd[1988]: time="2026-04-17T23:44:51.940918732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:44:51.942576 containerd[1988]: time="2026-04-17T23:44:51.941765548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:44:51.961494 kubelet[2813]: E0417 23:44:51.960646 2813 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.19.162:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.19.162:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 17 23:44:51.977570 systemd[1]: Started cri-containerd-947bf68da754584604b0b65783ef8f97d12bebf934332654c286fed4ecb06ae4.scope - libcontainer container 947bf68da754584604b0b65783ef8f97d12bebf934332654c286fed4ecb06ae4.
Apr 17 23:44:51.993395 systemd[1]: Started cri-containerd-3da28228328cda6cdbdb0dea7699e5e36ac24aaf1834d1c277bd55eb1e99eac8.scope - libcontainer container 3da28228328cda6cdbdb0dea7699e5e36ac24aaf1834d1c277bd55eb1e99eac8.
Apr 17 23:44:52.016095 systemd[1]: Started cri-containerd-a610b851b32415d266437d9b2159dc42c9bf7e53ed4d976ddff20956059ceed9.scope - libcontainer container a610b851b32415d266437d9b2159dc42c9bf7e53ed4d976ddff20956059ceed9.
Apr 17 23:44:52.097721 containerd[1988]: time="2026-04-17T23:44:52.095715683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-162,Uid:8e5965976f76651999cd4c0f9a43cb65,Namespace:kube-system,Attempt:0,} returns sandbox id \"947bf68da754584604b0b65783ef8f97d12bebf934332654c286fed4ecb06ae4\""
Apr 17 23:44:52.103868 kubelet[2813]: E0417 23:44:52.103523 2813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-162?timeout=10s\": dial tcp 172.31.19.162:6443: connect: connection refused" interval="1.6s"
Apr 17 23:44:52.113655 containerd[1988]: time="2026-04-17T23:44:52.113555744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-162,Uid:51ba0079d2689840fd5a4bd57436d762,Namespace:kube-system,Attempt:0,} returns sandbox id \"3da28228328cda6cdbdb0dea7699e5e36ac24aaf1834d1c277bd55eb1e99eac8\""
Apr 17 23:44:52.123654 containerd[1988]: time="2026-04-17T23:44:52.123535898Z" level=info msg="CreateContainer within sandbox \"947bf68da754584604b0b65783ef8f97d12bebf934332654c286fed4ecb06ae4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Apr 17 23:44:52.126889 containerd[1988]: time="2026-04-17T23:44:52.126858559Z" level=info msg="CreateContainer within sandbox \"3da28228328cda6cdbdb0dea7699e5e36ac24aaf1834d1c277bd55eb1e99eac8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Apr 17 23:44:52.134186 containerd[1988]: time="2026-04-17T23:44:52.134007258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-162,Uid:1a24e08a7f9a6ffbca6eea970d08cdbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"a610b851b32415d266437d9b2159dc42c9bf7e53ed4d976ddff20956059ceed9\""
Apr 17 23:44:52.141336 containerd[1988]: time="2026-04-17T23:44:52.141292169Z" level=info msg="CreateContainer within sandbox \"a610b851b32415d266437d9b2159dc42c9bf7e53ed4d976ddff20956059ceed9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Apr 17 23:44:52.172904 containerd[1988]: time="2026-04-17T23:44:52.172853844Z" level=info msg="CreateContainer within sandbox \"3da28228328cda6cdbdb0dea7699e5e36ac24aaf1834d1c277bd55eb1e99eac8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"03132a6e85d1ef5844df2612c0787ae9f793280310293b481378ae4557e1756c\""
Apr 17 23:44:52.174087 containerd[1988]: time="2026-04-17T23:44:52.173880633Z" level=info msg="StartContainer for \"03132a6e85d1ef5844df2612c0787ae9f793280310293b481378ae4557e1756c\""
Apr 17 23:44:52.176211 containerd[1988]: time="2026-04-17T23:44:52.176175202Z" level=info msg="CreateContainer within sandbox \"947bf68da754584604b0b65783ef8f97d12bebf934332654c286fed4ecb06ae4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0168a6e62bf0ede8638f1ae4a6e4f41c5d04b52f2ecc92a1103d89ed1443aa11\""
Apr 17 23:44:52.176845 containerd[1988]: time="2026-04-17T23:44:52.176817668Z" level=info msg="StartContainer for \"0168a6e62bf0ede8638f1ae4a6e4f41c5d04b52f2ecc92a1103d89ed1443aa11\""
Apr 17 23:44:52.180726 containerd[1988]: time="2026-04-17T23:44:52.180438821Z" level=info msg="CreateContainer within sandbox \"a610b851b32415d266437d9b2159dc42c9bf7e53ed4d976ddff20956059ceed9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"be5f5d573d7399b3013ba1c631a5359444bb85febf70969bb2c99fe08b5fe8ed\""
Apr 17 23:44:52.181731 containerd[1988]: time="2026-04-17T23:44:52.181316089Z" level=info msg="StartContainer for \"be5f5d573d7399b3013ba1c631a5359444bb85febf70969bb2c99fe08b5fe8ed\""
Apr 17 23:44:52.226512 systemd[1]: Started cri-containerd-0168a6e62bf0ede8638f1ae4a6e4f41c5d04b52f2ecc92a1103d89ed1443aa11.scope - libcontainer container 0168a6e62bf0ede8638f1ae4a6e4f41c5d04b52f2ecc92a1103d89ed1443aa11.
Apr 17 23:44:52.248538 systemd[1]: Started cri-containerd-03132a6e85d1ef5844df2612c0787ae9f793280310293b481378ae4557e1756c.scope - libcontainer container 03132a6e85d1ef5844df2612c0787ae9f793280310293b481378ae4557e1756c.
Apr 17 23:44:52.250768 kubelet[2813]: E0417 23:44:52.250623 2813 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.19.162:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.19.162:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 17 23:44:52.260348 systemd[1]: Started cri-containerd-be5f5d573d7399b3013ba1c631a5359444bb85febf70969bb2c99fe08b5fe8ed.scope - libcontainer container be5f5d573d7399b3013ba1c631a5359444bb85febf70969bb2c99fe08b5fe8ed.
Apr 17 23:44:52.285048 kubelet[2813]: I0417 23:44:52.284932 2813 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-162"
Apr 17 23:44:52.285344 kubelet[2813]: E0417 23:44:52.285309 2813 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.162:6443/api/v1/nodes\": dial tcp 172.31.19.162:6443: connect: connection refused" node="ip-172-31-19-162"
Apr 17 23:44:52.329725 containerd[1988]: time="2026-04-17T23:44:52.327931665Z" level=info msg="StartContainer for \"0168a6e62bf0ede8638f1ae4a6e4f41c5d04b52f2ecc92a1103d89ed1443aa11\" returns successfully"
Apr 17 23:44:52.345012 containerd[1988]: time="2026-04-17T23:44:52.344967557Z" level=info msg="StartContainer for \"03132a6e85d1ef5844df2612c0787ae9f793280310293b481378ae4557e1756c\" returns successfully"
Apr 17 23:44:52.359978 containerd[1988]: time="2026-04-17T23:44:52.359929506Z" level=info msg="StartContainer for \"be5f5d573d7399b3013ba1c631a5359444bb85febf70969bb2c99fe08b5fe8ed\" returns successfully"
Apr 17 23:44:52.763716 kubelet[2813]: E0417 23:44:52.762075 2813 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-162\" not found" node="ip-172-31-19-162"
Apr 17 23:44:52.767552 kubelet[2813]: E0417 23:44:52.767515 2813 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-162\" not found" node="ip-172-31-19-162"
Apr 17 23:44:52.771469 kubelet[2813]: E0417 23:44:52.771441 2813 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-162\" not found" node="ip-172-31-19-162"
Apr 17 23:44:53.773849 kubelet[2813]: E0417 23:44:53.773539 2813 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-162\" not found" node="ip-172-31-19-162"
Apr 17 23:44:53.773849 kubelet[2813]: E0417 23:44:53.773553 2813 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-162\" not found" node="ip-172-31-19-162"
Apr 17 23:44:53.889835 kubelet[2813]: I0417 23:44:53.888951 2813 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-162"
Apr 17 23:44:54.147831 kubelet[2813]: E0417 23:44:54.147713 2813 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-19-162\" not found" node="ip-172-31-19-162"
Apr 17 23:44:54.315922 kubelet[2813]: I0417 23:44:54.315755 2813 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-19-162"
Apr 17 23:44:54.400587 kubelet[2813]: I0417 23:44:54.400136 2813 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-19-162"
Apr 17 23:44:54.408144 kubelet[2813]: E0417 23:44:54.408101 2813 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-19-162\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-19-162"
Apr 17 23:44:54.408144 kubelet[2813]: I0417 23:44:54.408135 2813 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-19-162"
Apr 17 23:44:54.409978 kubelet[2813]: E0417 23:44:54.409936 2813 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-19-162\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-19-162"
Apr 17 23:44:54.409978 kubelet[2813]: I0417 23:44:54.409964 2813 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-19-162"
Apr 17 23:44:54.411563 kubelet[2813]: E0417 23:44:54.411527 2813 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-19-162\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-19-162"
Apr 17 23:44:54.674201 kubelet[2813]: I0417 23:44:54.674043 2813 apiserver.go:52] "Watching apiserver"
Apr 17 23:44:54.701195 kubelet[2813]: I0417 23:44:54.701128 2813 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 17 23:44:54.952225 kubelet[2813]: I0417 23:44:54.952099 2813 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-19-162"
Apr 17 23:44:54.954322 kubelet[2813]: E0417 23:44:54.954292 2813 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-19-162\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-19-162"
Apr 17 23:44:56.404147 systemd[1]: Reloading requested from client PID 3099 ('systemctl') (unit session-7.scope)...
Apr 17 23:44:56.404548 systemd[1]: Reloading...
Apr 17 23:44:56.529726 zram_generator::config[3135]: No configuration found.
Apr 17 23:44:56.665680 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:44:56.815156 systemd[1]: Reloading finished in 409 ms.
Apr 17 23:44:56.880530 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:44:56.900247 systemd[1]: kubelet.service: Deactivated successfully.
Apr 17 23:44:56.900547 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:44:56.900619 systemd[1]: kubelet.service: Consumed 1.691s CPU time, 124.3M memory peak, 0B memory swap peak.
Apr 17 23:44:56.910027 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:44:57.196420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:44:57.211434 (kubelet)[3199]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 17 23:44:57.286996 kubelet[3199]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 17 23:44:57.288744 kubelet[3199]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 17 23:44:57.288744 kubelet[3199]: I0417 23:44:57.287523 3199 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 17 23:44:57.294946 kubelet[3199]: I0417 23:44:57.294917 3199 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Apr 17 23:44:57.295090 kubelet[3199]: I0417 23:44:57.295081 3199 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 17 23:44:57.296190 kubelet[3199]: I0417 23:44:57.296167 3199 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 17 23:44:57.296292 kubelet[3199]: I0417 23:44:57.296282 3199 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 17 23:44:57.296615 kubelet[3199]: I0417 23:44:57.296582 3199 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 17 23:44:57.297924 kubelet[3199]: I0417 23:44:57.297907 3199 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Apr 17 23:44:57.306216 kubelet[3199]: I0417 23:44:57.306188 3199 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 17 23:44:57.312119 kubelet[3199]: E0417 23:44:57.312090 3199 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 17 23:44:57.312359 kubelet[3199]: I0417 23:44:57.312346 3199 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 17 23:44:57.314847 kubelet[3199]: I0417 23:44:57.314827 3199 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 17 23:44:57.315973 kubelet[3199]: I0417 23:44:57.315903 3199 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 17 23:44:57.316347 kubelet[3199]: I0417 23:44:57.316121 3199 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-162","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 17 23:44:57.316486 kubelet[3199]: I0417 23:44:57.316471 3199 topology_manager.go:138] "Creating topology manager with none policy"
Apr 17 23:44:57.316532 kubelet[3199]: I0417 23:44:57.316527 3199 container_manager_linux.go:306] "Creating device plugin manager"
Apr 17 23:44:57.316599 kubelet[3199]: I0417 23:44:57.316593 3199 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 17 23:44:57.316869 kubelet[3199]: I0417 23:44:57.316856 3199 state_mem.go:36] "Initialized new in-memory state store"
Apr 17 23:44:57.317085 kubelet[3199]: I0417 23:44:57.317073 3199 kubelet.go:475] "Attempting to sync node with API server"
Apr 17 23:44:57.317186 kubelet[3199]: I0417 23:44:57.317176 3199 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 17 23:44:57.317273 kubelet[3199]: I0417 23:44:57.317265 3199 kubelet.go:387] "Adding apiserver pod source"
Apr 17 23:44:57.317402 kubelet[3199]: I0417 23:44:57.317337 3199 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 17 23:44:57.318858 kubelet[3199]: I0417 23:44:57.318422 3199 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 17 23:44:57.319306 kubelet[3199]: I0417 23:44:57.319289 3199 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 17 23:44:57.319406 kubelet[3199]: I0417 23:44:57.319396 3199 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 17 23:44:57.336233 kubelet[3199]: I0417 23:44:57.336201 3199 server.go:1262] "Started kubelet"
Apr 17 23:44:57.341640 kubelet[3199]: I0417 23:44:57.341604 3199 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 17 23:44:57.343267 kubelet[3199]: I0417 23:44:57.343195 3199 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 17 23:44:57.343812 kubelet[3199]: I0417 23:44:57.343727 3199 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 17 23:44:57.344855 kubelet[3199]: I0417 23:44:57.344078 3199 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 17 23:44:57.347651 kubelet[3199]: I0417 23:44:57.347627 3199 server.go:310] "Adding debug handlers to kubelet server"
Apr 17 23:44:57.349071 kubelet[3199]: I0417 23:44:57.349049 3199 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 17 23:44:57.353373 kubelet[3199]: I0417 23:44:57.353313 3199 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 17 23:44:57.357178 kubelet[3199]: I0417 23:44:57.357150 3199 volume_manager.go:313] "Starting Kubelet Volume Manager"
Apr 17 23:44:57.359753 kubelet[3199]: I0417 23:44:57.358912 3199 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 17 23:44:57.359753 kubelet[3199]: I0417 23:44:57.359088 3199 reconciler.go:29] "Reconciler: start to sync state"
Apr 17 23:44:57.364955 kubelet[3199]: I0417 23:44:57.364915 3199 factory.go:223] Registration of the systemd container factory successfully
Apr 17 23:44:57.365088 kubelet[3199]: I0417 23:44:57.365061 3199 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 17 23:44:57.371921 kubelet[3199]: I0417 23:44:57.371862 3199 factory.go:223] Registration of the containerd container factory successfully
Apr 17 23:44:57.375388 kubelet[3199]: E0417 23:44:57.375357 3199 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 17 23:44:57.383567 kubelet[3199]: I0417 23:44:57.383344 3199 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 17 23:44:57.386246 kubelet[3199]: I0417 23:44:57.385285 3199 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 17 23:44:57.386444 kubelet[3199]: I0417 23:44:57.386427 3199 status_manager.go:244] "Starting to sync pod status with apiserver"
Apr 17 23:44:57.386542 kubelet[3199]: I0417 23:44:57.386520 3199 kubelet.go:2428] "Starting kubelet main sync loop"
Apr 17 23:44:57.387716 kubelet[3199]: E0417 23:44:57.386820 3199 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 17 23:44:57.438550 sudo[3234]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Apr 17 23:44:57.439138 sudo[3234]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Apr 17 23:44:57.455874 kubelet[3199]: I0417 23:44:57.455475 3199 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 17 23:44:57.455874 kubelet[3199]: I0417 23:44:57.455493 3199 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 17 23:44:57.455874 kubelet[3199]: I0417 23:44:57.455515 3199 state_mem.go:36] "Initialized new in-memory state store"
Apr 17 23:44:57.455874 kubelet[3199]: I0417 23:44:57.455683 3199 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 17 23:44:57.457737 kubelet[3199]: I0417 23:44:57.457405 3199 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 17 23:44:57.457737 kubelet[3199]: I0417 23:44:57.457460 3199 policy_none.go:49] "None policy: Start"
Apr 17 23:44:57.457737 kubelet[3199]: I0417 23:44:57.457476 3199 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 17 23:44:57.457737 kubelet[3199]: I0417 23:44:57.457495 3199 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 17 23:44:57.457737 kubelet[3199]: I0417 23:44:57.457662 3199 state_mem.go:77] "Updated machine memory state"
logger="Memory Manager state checkpoint" Apr 17 23:44:57.457737 kubelet[3199]: I0417 23:44:57.457674 3199 policy_none.go:47] "Start" Apr 17 23:44:57.467528 kubelet[3199]: E0417 23:44:57.467004 3199 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 23:44:57.467528 kubelet[3199]: I0417 23:44:57.467223 3199 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 23:44:57.467528 kubelet[3199]: I0417 23:44:57.467235 3199 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 23:44:57.473629 kubelet[3199]: I0417 23:44:57.473589 3199 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 23:44:57.482008 kubelet[3199]: E0417 23:44:57.481963 3199 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 23:44:57.498189 kubelet[3199]: I0417 23:44:57.497154 3199 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-19-162" Apr 17 23:44:57.513719 kubelet[3199]: I0417 23:44:57.510672 3199 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-19-162" Apr 17 23:44:57.519916 kubelet[3199]: I0417 23:44:57.518183 3199 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-19-162" Apr 17 23:44:57.559321 kubelet[3199]: I0417 23:44:57.559281 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/51ba0079d2689840fd5a4bd57436d762-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-162\" (UID: \"51ba0079d2689840fd5a4bd57436d762\") " pod="kube-system/kube-controller-manager-ip-172-31-19-162" Apr 17 23:44:57.559857 kubelet[3199]: I0417 23:44:57.559767 3199 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/51ba0079d2689840fd5a4bd57436d762-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-162\" (UID: \"51ba0079d2689840fd5a4bd57436d762\") " pod="kube-system/kube-controller-manager-ip-172-31-19-162" Apr 17 23:44:57.560129 kubelet[3199]: I0417 23:44:57.560110 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1a24e08a7f9a6ffbca6eea970d08cdbb-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-162\" (UID: \"1a24e08a7f9a6ffbca6eea970d08cdbb\") " pod="kube-system/kube-scheduler-ip-172-31-19-162" Apr 17 23:44:57.560268 kubelet[3199]: I0417 23:44:57.560254 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8e5965976f76651999cd4c0f9a43cb65-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-162\" (UID: \"8e5965976f76651999cd4c0f9a43cb65\") " pod="kube-system/kube-apiserver-ip-172-31-19-162" Apr 17 23:44:57.560393 kubelet[3199]: I0417 23:44:57.560377 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8e5965976f76651999cd4c0f9a43cb65-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-162\" (UID: \"8e5965976f76651999cd4c0f9a43cb65\") " pod="kube-system/kube-apiserver-ip-172-31-19-162" Apr 17 23:44:57.560523 kubelet[3199]: I0417 23:44:57.560492 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/51ba0079d2689840fd5a4bd57436d762-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-162\" (UID: \"51ba0079d2689840fd5a4bd57436d762\") " pod="kube-system/kube-controller-manager-ip-172-31-19-162" Apr 17 
23:44:57.560855 kubelet[3199]: I0417 23:44:57.560837 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/51ba0079d2689840fd5a4bd57436d762-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-162\" (UID: \"51ba0079d2689840fd5a4bd57436d762\") " pod="kube-system/kube-controller-manager-ip-172-31-19-162" Apr 17 23:44:57.561014 kubelet[3199]: I0417 23:44:57.560992 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/51ba0079d2689840fd5a4bd57436d762-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-162\" (UID: \"51ba0079d2689840fd5a4bd57436d762\") " pod="kube-system/kube-controller-manager-ip-172-31-19-162" Apr 17 23:44:57.561220 kubelet[3199]: I0417 23:44:57.561113 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8e5965976f76651999cd4c0f9a43cb65-ca-certs\") pod \"kube-apiserver-ip-172-31-19-162\" (UID: \"8e5965976f76651999cd4c0f9a43cb65\") " pod="kube-system/kube-apiserver-ip-172-31-19-162" Apr 17 23:44:57.585918 kubelet[3199]: I0417 23:44:57.585890 3199 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-162" Apr 17 23:44:57.601729 kubelet[3199]: I0417 23:44:57.601493 3199 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-19-162" Apr 17 23:44:57.601729 kubelet[3199]: I0417 23:44:57.601587 3199 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-19-162" Apr 17 23:44:58.141910 sudo[3234]: pam_unix(sudo:session): session closed for user root Apr 17 23:44:58.325305 kubelet[3199]: I0417 23:44:58.325007 3199 apiserver.go:52] "Watching apiserver" Apr 17 23:44:58.360720 kubelet[3199]: I0417 23:44:58.360009 3199 desired_state_of_world_populator.go:154] "Finished populating initial 
desired state of world" Apr 17 23:44:58.426805 kubelet[3199]: I0417 23:44:58.426586 3199 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-19-162" Apr 17 23:44:58.450352 kubelet[3199]: E0417 23:44:58.450102 3199 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-19-162\" already exists" pod="kube-system/kube-scheduler-ip-172-31-19-162" Apr 17 23:44:58.466628 kubelet[3199]: I0417 23:44:58.466242 3199 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-19-162" podStartSLOduration=1.466223675 podStartE2EDuration="1.466223675s" podCreationTimestamp="2026-04-17 23:44:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:44:58.465180403 +0000 UTC m=+1.245325082" watchObservedRunningTime="2026-04-17 23:44:58.466223675 +0000 UTC m=+1.246368331" Apr 17 23:44:58.495782 kubelet[3199]: I0417 23:44:58.495531 3199 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-19-162" podStartSLOduration=1.495508807 podStartE2EDuration="1.495508807s" podCreationTimestamp="2026-04-17 23:44:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:44:58.484412595 +0000 UTC m=+1.264557247" watchObservedRunningTime="2026-04-17 23:44:58.495508807 +0000 UTC m=+1.275653452" Apr 17 23:44:58.503306 kubelet[3199]: I0417 23:44:58.502812 3199 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-19-162" podStartSLOduration=1.502792236 podStartE2EDuration="1.502792236s" podCreationTimestamp="2026-04-17 23:44:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 
23:44:58.502267722 +0000 UTC m=+1.282412375" watchObservedRunningTime="2026-04-17 23:44:58.502792236 +0000 UTC m=+1.282936888" Apr 17 23:44:59.701166 sudo[2304]: pam_unix(sudo:session): session closed for user root Apr 17 23:44:59.861272 sshd[2298]: pam_unix(sshd:session): session closed for user core Apr 17 23:44:59.865992 systemd[1]: sshd@6-172.31.19.162:22-20.229.252.112:53284.service: Deactivated successfully. Apr 17 23:44:59.868246 systemd[1]: session-7.scope: Deactivated successfully. Apr 17 23:44:59.868453 systemd[1]: session-7.scope: Consumed 5.964s CPU time, 152.4M memory peak, 0B memory swap peak. Apr 17 23:44:59.869561 systemd-logind[1967]: Session 7 logged out. Waiting for processes to exit. Apr 17 23:44:59.870974 systemd-logind[1967]: Removed session 7. Apr 17 23:45:01.847345 kubelet[3199]: I0417 23:45:01.847116 3199 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 17 23:45:01.849691 containerd[1988]: time="2026-04-17T23:45:01.848653362Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 17 23:45:01.850246 kubelet[3199]: I0417 23:45:01.848943 3199 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 17 23:45:03.006810 systemd[1]: Created slice kubepods-burstable-pod082682f2_c210_4d89_813d_1d82ed4c11a0.slice - libcontainer container kubepods-burstable-pod082682f2_c210_4d89_813d_1d82ed4c11a0.slice. Apr 17 23:45:03.021380 systemd[1]: Created slice kubepods-besteffort-pod0d357464_ed4e_4805_8d7a_fdae294a62d8.slice - libcontainer container kubepods-besteffort-pod0d357464_ed4e_4805_8d7a_fdae294a62d8.slice. 
Apr 17 23:45:03.027512 kubelet[3199]: I0417 23:45:03.027480 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/082682f2-c210-4d89-813d-1d82ed4c11a0-cilium-cgroup\") pod \"cilium-brz64\" (UID: \"082682f2-c210-4d89-813d-1d82ed4c11a0\") " pod="kube-system/cilium-brz64"
Apr 17 23:45:03.030556 kubelet[3199]: I0417 23:45:03.028799 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/082682f2-c210-4d89-813d-1d82ed4c11a0-cilium-config-path\") pod \"cilium-brz64\" (UID: \"082682f2-c210-4d89-813d-1d82ed4c11a0\") " pod="kube-system/cilium-brz64"
Apr 17 23:45:03.030556 kubelet[3199]: I0417 23:45:03.028868 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/082682f2-c210-4d89-813d-1d82ed4c11a0-host-proc-sys-kernel\") pod \"cilium-brz64\" (UID: \"082682f2-c210-4d89-813d-1d82ed4c11a0\") " pod="kube-system/cilium-brz64"
Apr 17 23:45:03.030556 kubelet[3199]: I0417 23:45:03.028913 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/082682f2-c210-4d89-813d-1d82ed4c11a0-bpf-maps\") pod \"cilium-brz64\" (UID: \"082682f2-c210-4d89-813d-1d82ed4c11a0\") " pod="kube-system/cilium-brz64"
Apr 17 23:45:03.030556 kubelet[3199]: I0417 23:45:03.028936 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/082682f2-c210-4d89-813d-1d82ed4c11a0-clustermesh-secrets\") pod \"cilium-brz64\" (UID: \"082682f2-c210-4d89-813d-1d82ed4c11a0\") " pod="kube-system/cilium-brz64"
Apr 17 23:45:03.030556 kubelet[3199]: I0417 23:45:03.028957 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/082682f2-c210-4d89-813d-1d82ed4c11a0-hubble-tls\") pod \"cilium-brz64\" (UID: \"082682f2-c210-4d89-813d-1d82ed4c11a0\") " pod="kube-system/cilium-brz64"
Apr 17 23:45:03.030885 kubelet[3199]: I0417 23:45:03.028992 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gq9r\" (UniqueName: \"kubernetes.io/projected/082682f2-c210-4d89-813d-1d82ed4c11a0-kube-api-access-4gq9r\") pod \"cilium-brz64\" (UID: \"082682f2-c210-4d89-813d-1d82ed4c11a0\") " pod="kube-system/cilium-brz64"
Apr 17 23:45:03.030885 kubelet[3199]: I0417 23:45:03.029017 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d357464-ed4e-4805-8d7a-fdae294a62d8-xtables-lock\") pod \"kube-proxy-wcq7b\" (UID: \"0d357464-ed4e-4805-8d7a-fdae294a62d8\") " pod="kube-system/kube-proxy-wcq7b"
Apr 17 23:45:03.030885 kubelet[3199]: I0417 23:45:03.029181 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/082682f2-c210-4d89-813d-1d82ed4c11a0-etc-cni-netd\") pod \"cilium-brz64\" (UID: \"082682f2-c210-4d89-813d-1d82ed4c11a0\") " pod="kube-system/cilium-brz64"
Apr 17 23:45:03.030885 kubelet[3199]: I0417 23:45:03.029208 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/082682f2-c210-4d89-813d-1d82ed4c11a0-host-proc-sys-net\") pod \"cilium-brz64\" (UID: \"082682f2-c210-4d89-813d-1d82ed4c11a0\") " pod="kube-system/cilium-brz64"
Apr 17 23:45:03.030885 kubelet[3199]: I0417 23:45:03.029242 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0d357464-ed4e-4805-8d7a-fdae294a62d8-kube-proxy\") pod \"kube-proxy-wcq7b\" (UID: \"0d357464-ed4e-4805-8d7a-fdae294a62d8\") " pod="kube-system/kube-proxy-wcq7b"
Apr 17 23:45:03.031089 kubelet[3199]: I0417 23:45:03.029265 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xnlw\" (UniqueName: \"kubernetes.io/projected/0d357464-ed4e-4805-8d7a-fdae294a62d8-kube-api-access-6xnlw\") pod \"kube-proxy-wcq7b\" (UID: \"0d357464-ed4e-4805-8d7a-fdae294a62d8\") " pod="kube-system/kube-proxy-wcq7b"
Apr 17 23:45:03.031089 kubelet[3199]: I0417 23:45:03.029287 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/082682f2-c210-4d89-813d-1d82ed4c11a0-cni-path\") pod \"cilium-brz64\" (UID: \"082682f2-c210-4d89-813d-1d82ed4c11a0\") " pod="kube-system/cilium-brz64"
Apr 17 23:45:03.031089 kubelet[3199]: I0417 23:45:03.029313 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/082682f2-c210-4d89-813d-1d82ed4c11a0-lib-modules\") pod \"cilium-brz64\" (UID: \"082682f2-c210-4d89-813d-1d82ed4c11a0\") " pod="kube-system/cilium-brz64"
Apr 17 23:45:03.031089 kubelet[3199]: I0417 23:45:03.029333 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/082682f2-c210-4d89-813d-1d82ed4c11a0-xtables-lock\") pod \"cilium-brz64\" (UID: \"082682f2-c210-4d89-813d-1d82ed4c11a0\") " pod="kube-system/cilium-brz64"
Apr 17 23:45:03.031089 kubelet[3199]: I0417 23:45:03.029357 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d357464-ed4e-4805-8d7a-fdae294a62d8-lib-modules\") pod \"kube-proxy-wcq7b\" (UID: \"0d357464-ed4e-4805-8d7a-fdae294a62d8\") " pod="kube-system/kube-proxy-wcq7b"
Apr 17 23:45:03.031089 kubelet[3199]: I0417 23:45:03.029388 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/082682f2-c210-4d89-813d-1d82ed4c11a0-cilium-run\") pod \"cilium-brz64\" (UID: \"082682f2-c210-4d89-813d-1d82ed4c11a0\") " pod="kube-system/cilium-brz64"
Apr 17 23:45:03.031318 kubelet[3199]: I0417 23:45:03.029411 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/082682f2-c210-4d89-813d-1d82ed4c11a0-hostproc\") pod \"cilium-brz64\" (UID: \"082682f2-c210-4d89-813d-1d82ed4c11a0\") " pod="kube-system/cilium-brz64"
Apr 17 23:45:03.131739 systemd[1]: Created slice kubepods-besteffort-pod410ee967_0222_4877_b051_a9f67fe1a8e0.slice - libcontainer container kubepods-besteffort-pod410ee967_0222_4877_b051_a9f67fe1a8e0.slice.
Apr 17 23:45:03.236132 kubelet[3199]: I0417 23:45:03.236077 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqwfw\" (UniqueName: \"kubernetes.io/projected/410ee967-0222-4877-b051-a9f67fe1a8e0-kube-api-access-sqwfw\") pod \"cilium-operator-6f9c7c5859-8rmxp\" (UID: \"410ee967-0222-4877-b051-a9f67fe1a8e0\") " pod="kube-system/cilium-operator-6f9c7c5859-8rmxp"
Apr 17 23:45:03.236132 kubelet[3199]: I0417 23:45:03.236141 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/410ee967-0222-4877-b051-a9f67fe1a8e0-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-8rmxp\" (UID: \"410ee967-0222-4877-b051-a9f67fe1a8e0\") " pod="kube-system/cilium-operator-6f9c7c5859-8rmxp"
Apr 17 23:45:03.319633 containerd[1988]: time="2026-04-17T23:45:03.319522537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-brz64,Uid:082682f2-c210-4d89-813d-1d82ed4c11a0,Namespace:kube-system,Attempt:0,}"
Apr 17 23:45:03.339662 containerd[1988]: time="2026-04-17T23:45:03.338946443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wcq7b,Uid:0d357464-ed4e-4805-8d7a-fdae294a62d8,Namespace:kube-system,Attempt:0,}"
Apr 17 23:45:03.371959 containerd[1988]: time="2026-04-17T23:45:03.371276634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:45:03.371959 containerd[1988]: time="2026-04-17T23:45:03.371359259Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:45:03.371959 containerd[1988]: time="2026-04-17T23:45:03.371381308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:45:03.371959 containerd[1988]: time="2026-04-17T23:45:03.371530609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:45:03.408451 containerd[1988]: time="2026-04-17T23:45:03.407684088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:45:03.408849 containerd[1988]: time="2026-04-17T23:45:03.408482540Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:45:03.408849 containerd[1988]: time="2026-04-17T23:45:03.408537512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:45:03.408849 containerd[1988]: time="2026-04-17T23:45:03.408673067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:45:03.412228 systemd[1]: Started cri-containerd-7d5f6a9025acd3d10bd7d1b501c8598067375f52ac8dcfcb29e4b726ee865ac0.scope - libcontainer container 7d5f6a9025acd3d10bd7d1b501c8598067375f52ac8dcfcb29e4b726ee865ac0.
Apr 17 23:45:03.439033 systemd[1]: Started cri-containerd-127014081066e579fcae16465faccd73bb8f76fc1d72034f14c6279edb06951d.scope - libcontainer container 127014081066e579fcae16465faccd73bb8f76fc1d72034f14c6279edb06951d.
Apr 17 23:45:03.470869 containerd[1988]: time="2026-04-17T23:45:03.470692471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-8rmxp,Uid:410ee967-0222-4877-b051-a9f67fe1a8e0,Namespace:kube-system,Attempt:0,}"
Apr 17 23:45:03.474570 containerd[1988]: time="2026-04-17T23:45:03.474531356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-brz64,Uid:082682f2-c210-4d89-813d-1d82ed4c11a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d5f6a9025acd3d10bd7d1b501c8598067375f52ac8dcfcb29e4b726ee865ac0\""
Apr 17 23:45:03.481545 containerd[1988]: time="2026-04-17T23:45:03.481427137Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Apr 17 23:45:03.499754 containerd[1988]: time="2026-04-17T23:45:03.498958727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wcq7b,Uid:0d357464-ed4e-4805-8d7a-fdae294a62d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"127014081066e579fcae16465faccd73bb8f76fc1d72034f14c6279edb06951d\""
Apr 17 23:45:03.508124 containerd[1988]: time="2026-04-17T23:45:03.508073898Z" level=info msg="CreateContainer within sandbox \"127014081066e579fcae16465faccd73bb8f76fc1d72034f14c6279edb06951d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 17 23:45:03.526266 containerd[1988]: time="2026-04-17T23:45:03.525442248Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:45:03.526266 containerd[1988]: time="2026-04-17T23:45:03.525498314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:45:03.526266 containerd[1988]: time="2026-04-17T23:45:03.525517586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:45:03.526266 containerd[1988]: time="2026-04-17T23:45:03.525630056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:45:03.546019 containerd[1988]: time="2026-04-17T23:45:03.545980040Z" level=info msg="CreateContainer within sandbox \"127014081066e579fcae16465faccd73bb8f76fc1d72034f14c6279edb06951d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ede76c4587a9749ed68eb78d269797bd9b49f2b04d397607c41803ebdee83846\""
Apr 17 23:45:03.549327 containerd[1988]: time="2026-04-17T23:45:03.546960088Z" level=info msg="StartContainer for \"ede76c4587a9749ed68eb78d269797bd9b49f2b04d397607c41803ebdee83846\""
Apr 17 23:45:03.548919 systemd[1]: Started cri-containerd-2b081fa886b212ef8bb0f646ef285e5c07395b873c5d79e66b073da35799517d.scope - libcontainer container 2b081fa886b212ef8bb0f646ef285e5c07395b873c5d79e66b073da35799517d.
Apr 17 23:45:03.601944 systemd[1]: Started cri-containerd-ede76c4587a9749ed68eb78d269797bd9b49f2b04d397607c41803ebdee83846.scope - libcontainer container ede76c4587a9749ed68eb78d269797bd9b49f2b04d397607c41803ebdee83846.
Apr 17 23:45:03.635051 update_engine[1968]: I20260417 23:45:03.634989 1968 update_attempter.cc:509] Updating boot flags...
Apr 17 23:45:03.637653 containerd[1988]: time="2026-04-17T23:45:03.635322864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-8rmxp,Uid:410ee967-0222-4877-b051-a9f67fe1a8e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b081fa886b212ef8bb0f646ef285e5c07395b873c5d79e66b073da35799517d\""
Apr 17 23:45:03.676921 containerd[1988]: time="2026-04-17T23:45:03.676872917Z" level=info msg="StartContainer for \"ede76c4587a9749ed68eb78d269797bd9b49f2b04d397607c41803ebdee83846\" returns successfully"
Apr 17 23:45:03.714213 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 32 scanned by (udev-worker) (3448)
Apr 17 23:45:04.944324 kubelet[3199]: I0417 23:45:04.944253 3199 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wcq7b" podStartSLOduration=2.944194251 podStartE2EDuration="2.944194251s" podCreationTimestamp="2026-04-17 23:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:45:04.485729782 +0000 UTC m=+7.265874442" watchObservedRunningTime="2026-04-17 23:45:04.944194251 +0000 UTC m=+7.724338901"
Apr 17 23:45:11.433609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount245935573.mount: Deactivated successfully.
Apr 17 23:45:14.099210 containerd[1988]: time="2026-04-17T23:45:14.099073732Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.61759358s"
Apr 17 23:45:14.099210 containerd[1988]: time="2026-04-17T23:45:14.099142264Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Apr 17 23:45:14.109014 containerd[1988]: time="2026-04-17T23:45:14.046180664Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Apr 17 23:45:14.109014 containerd[1988]: time="2026-04-17T23:45:14.107209460Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:45:14.112791 containerd[1988]: time="2026-04-17T23:45:14.112748069Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:45:14.153736 containerd[1988]: time="2026-04-17T23:45:14.152999894Z" level=info msg="CreateContainer within sandbox \"7d5f6a9025acd3d10bd7d1b501c8598067375f52ac8dcfcb29e4b726ee865ac0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 17 23:45:14.163827 containerd[1988]: time="2026-04-17T23:45:14.162647703Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Apr 17 23:45:14.248049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount13552861.mount: Deactivated successfully.
Apr 17 23:45:14.253841 containerd[1988]: time="2026-04-17T23:45:14.253794058Z" level=info msg="CreateContainer within sandbox \"7d5f6a9025acd3d10bd7d1b501c8598067375f52ac8dcfcb29e4b726ee865ac0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7a97223adc8cad5b1a81449b6a647e94fd129e3ac2ebe88e418587fb9b9ca3e7\""
Apr 17 23:45:14.254969 containerd[1988]: time="2026-04-17T23:45:14.254931450Z" level=info msg="StartContainer for \"7a97223adc8cad5b1a81449b6a647e94fd129e3ac2ebe88e418587fb9b9ca3e7\""
Apr 17 23:45:14.363746 systemd[1]: run-containerd-runc-k8s.io-7a97223adc8cad5b1a81449b6a647e94fd129e3ac2ebe88e418587fb9b9ca3e7-runc.Vwysk6.mount: Deactivated successfully.
Apr 17 23:45:14.372901 systemd[1]: Started cri-containerd-7a97223adc8cad5b1a81449b6a647e94fd129e3ac2ebe88e418587fb9b9ca3e7.scope - libcontainer container 7a97223adc8cad5b1a81449b6a647e94fd129e3ac2ebe88e418587fb9b9ca3e7.
Apr 17 23:45:14.412150 containerd[1988]: time="2026-04-17T23:45:14.412095673Z" level=info msg="StartContainer for \"7a97223adc8cad5b1a81449b6a647e94fd129e3ac2ebe88e418587fb9b9ca3e7\" returns successfully"
Apr 17 23:45:14.422384 systemd[1]: cri-containerd-7a97223adc8cad5b1a81449b6a647e94fd129e3ac2ebe88e418587fb9b9ca3e7.scope: Deactivated successfully.
Apr 17 23:45:14.525767 containerd[1988]: time="2026-04-17T23:45:14.503582201Z" level=info msg="shim disconnected" id=7a97223adc8cad5b1a81449b6a647e94fd129e3ac2ebe88e418587fb9b9ca3e7 namespace=k8s.io
Apr 17 23:45:14.526015 containerd[1988]: time="2026-04-17T23:45:14.525772716Z" level=warning msg="cleaning up after shim disconnected" id=7a97223adc8cad5b1a81449b6a647e94fd129e3ac2ebe88e418587fb9b9ca3e7 namespace=k8s.io
Apr 17 23:45:14.526015 containerd[1988]: time="2026-04-17T23:45:14.525793984Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:45:15.242477 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a97223adc8cad5b1a81449b6a647e94fd129e3ac2ebe88e418587fb9b9ca3e7-rootfs.mount: Deactivated successfully.
Apr 17 23:45:15.431557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3169032260.mount: Deactivated successfully.
Apr 17 23:45:15.532896 containerd[1988]: time="2026-04-17T23:45:15.532765186Z" level=info msg="CreateContainer within sandbox \"7d5f6a9025acd3d10bd7d1b501c8598067375f52ac8dcfcb29e4b726ee865ac0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 17 23:45:15.556249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2985296583.mount: Deactivated successfully.
Apr 17 23:45:15.565854 containerd[1988]: time="2026-04-17T23:45:15.565803486Z" level=info msg="CreateContainer within sandbox \"7d5f6a9025acd3d10bd7d1b501c8598067375f52ac8dcfcb29e4b726ee865ac0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0a61746baf742d3cb2f2a231075bf4899aea79e30569af2a6c03217abbcb8650\""
Apr 17 23:45:15.569633 containerd[1988]: time="2026-04-17T23:45:15.568361056Z" level=info msg="StartContainer for \"0a61746baf742d3cb2f2a231075bf4899aea79e30569af2a6c03217abbcb8650\""
Apr 17 23:45:15.610947 systemd[1]: Started cri-containerd-0a61746baf742d3cb2f2a231075bf4899aea79e30569af2a6c03217abbcb8650.scope - libcontainer container 0a61746baf742d3cb2f2a231075bf4899aea79e30569af2a6c03217abbcb8650.
Apr 17 23:45:15.650098 containerd[1988]: time="2026-04-17T23:45:15.649838582Z" level=info msg="StartContainer for \"0a61746baf742d3cb2f2a231075bf4899aea79e30569af2a6c03217abbcb8650\" returns successfully"
Apr 17 23:45:15.664868 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 17 23:45:15.665254 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:45:15.665331 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Apr 17 23:45:15.673861 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 23:45:15.675436 systemd[1]: cri-containerd-0a61746baf742d3cb2f2a231075bf4899aea79e30569af2a6c03217abbcb8650.scope: Deactivated successfully.
Apr 17 23:45:15.715809 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:45:15.757651 containerd[1988]: time="2026-04-17T23:45:15.757563503Z" level=info msg="shim disconnected" id=0a61746baf742d3cb2f2a231075bf4899aea79e30569af2a6c03217abbcb8650 namespace=k8s.io
Apr 17 23:45:15.757651 containerd[1988]: time="2026-04-17T23:45:15.757634127Z" level=warning msg="cleaning up after shim disconnected" id=0a61746baf742d3cb2f2a231075bf4899aea79e30569af2a6c03217abbcb8650 namespace=k8s.io
Apr 17 23:45:15.757651 containerd[1988]: time="2026-04-17T23:45:15.757650397Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:45:16.505278 containerd[1988]: time="2026-04-17T23:45:16.505140892Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:45:16.511285 containerd[1988]: time="2026-04-17T23:45:16.511195968Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Apr 17 23:45:16.522720 containerd[1988]: time="2026-04-17T23:45:16.520678062Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:45:16.523626 containerd[1988]: time="2026-04-17T23:45:16.523590228Z" level=info msg="CreateContainer within sandbox \"7d5f6a9025acd3d10bd7d1b501c8598067375f52ac8dcfcb29e4b726ee865ac0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 17 23:45:16.530473 containerd[1988]: time="2026-04-17T23:45:16.529307991Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.366584608s"
Apr 17 23:45:16.530473 containerd[1988]: time="2026-04-17T23:45:16.529388254Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Apr 17 23:45:16.546472 containerd[1988]: time="2026-04-17T23:45:16.546417211Z" level=info msg="CreateContainer within sandbox \"2b081fa886b212ef8bb0f646ef285e5c07395b873c5d79e66b073da35799517d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Apr 17 23:45:16.592732 containerd[1988]: time="2026-04-17T23:45:16.592671320Z" level=info msg="CreateContainer within sandbox \"7d5f6a9025acd3d10bd7d1b501c8598067375f52ac8dcfcb29e4b726ee865ac0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0cb93723c14ac2f266ef06b9ac5f4084528e4cc601d29e45173d30c9e7c2fe4c\""
Apr 17 23:45:16.594921 containerd[1988]: time="2026-04-17T23:45:16.593482338Z" level=info msg="StartContainer for \"0cb93723c14ac2f266ef06b9ac5f4084528e4cc601d29e45173d30c9e7c2fe4c\""
Apr 17 23:45:16.600890 containerd[1988]: time="2026-04-17T23:45:16.600843945Z" level=info msg="CreateContainer within sandbox \"2b081fa886b212ef8bb0f646ef285e5c07395b873c5d79e66b073da35799517d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"23e5d2dd2ebca6f4a01ee73b70f86c3c1ad91db47e6c17a492826f4a4774b697\""
Apr 17 23:45:16.602806 containerd[1988]: time="2026-04-17T23:45:16.602774202Z" level=info msg="StartContainer for \"23e5d2dd2ebca6f4a01ee73b70f86c3c1ad91db47e6c17a492826f4a4774b697\""
Apr 17 23:45:16.640004 systemd[1]: Started cri-containerd-0cb93723c14ac2f266ef06b9ac5f4084528e4cc601d29e45173d30c9e7c2fe4c.scope - libcontainer container 0cb93723c14ac2f266ef06b9ac5f4084528e4cc601d29e45173d30c9e7c2fe4c.
Apr 17 23:45:16.658648 systemd[1]: Started cri-containerd-23e5d2dd2ebca6f4a01ee73b70f86c3c1ad91db47e6c17a492826f4a4774b697.scope - libcontainer container 23e5d2dd2ebca6f4a01ee73b70f86c3c1ad91db47e6c17a492826f4a4774b697.
Apr 17 23:45:16.701832 containerd[1988]: time="2026-04-17T23:45:16.701562206Z" level=info msg="StartContainer for \"0cb93723c14ac2f266ef06b9ac5f4084528e4cc601d29e45173d30c9e7c2fe4c\" returns successfully"
Apr 17 23:45:16.701832 containerd[1988]: time="2026-04-17T23:45:16.701671579Z" level=info msg="StartContainer for \"23e5d2dd2ebca6f4a01ee73b70f86c3c1ad91db47e6c17a492826f4a4774b697\" returns successfully"
Apr 17 23:45:16.708307 systemd[1]: cri-containerd-0cb93723c14ac2f266ef06b9ac5f4084528e4cc601d29e45173d30c9e7c2fe4c.scope: Deactivated successfully.
Apr 17 23:45:16.766919 containerd[1988]: time="2026-04-17T23:45:16.766573268Z" level=info msg="shim disconnected" id=0cb93723c14ac2f266ef06b9ac5f4084528e4cc601d29e45173d30c9e7c2fe4c namespace=k8s.io
Apr 17 23:45:16.766919 containerd[1988]: time="2026-04-17T23:45:16.766637155Z" level=warning msg="cleaning up after shim disconnected" id=0cb93723c14ac2f266ef06b9ac5f4084528e4cc601d29e45173d30c9e7c2fe4c namespace=k8s.io
Apr 17 23:45:16.766919 containerd[1988]: time="2026-04-17T23:45:16.766649403Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:45:17.533048 containerd[1988]: time="2026-04-17T23:45:17.532992865Z" level=info msg="CreateContainer within sandbox \"7d5f6a9025acd3d10bd7d1b501c8598067375f52ac8dcfcb29e4b726ee865ac0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 17 23:45:17.565345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3020704378.mount: Deactivated successfully.
Apr 17 23:45:17.573218 containerd[1988]: time="2026-04-17T23:45:17.573172769Z" level=info msg="CreateContainer within sandbox \"7d5f6a9025acd3d10bd7d1b501c8598067375f52ac8dcfcb29e4b726ee865ac0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b1ccfec0c4ca55d7d1ab2a1867d132fc4cf56c5edd32d69d78a06b88f20a71c8\""
Apr 17 23:45:17.576733 containerd[1988]: time="2026-04-17T23:45:17.574876732Z" level=info msg="StartContainer for \"b1ccfec0c4ca55d7d1ab2a1867d132fc4cf56c5edd32d69d78a06b88f20a71c8\""
Apr 17 23:45:17.659020 systemd[1]: Started cri-containerd-b1ccfec0c4ca55d7d1ab2a1867d132fc4cf56c5edd32d69d78a06b88f20a71c8.scope - libcontainer container b1ccfec0c4ca55d7d1ab2a1867d132fc4cf56c5edd32d69d78a06b88f20a71c8.
Apr 17 23:45:17.702854 systemd[1]: cri-containerd-b1ccfec0c4ca55d7d1ab2a1867d132fc4cf56c5edd32d69d78a06b88f20a71c8.scope: Deactivated successfully.
Apr 17 23:45:17.706897 containerd[1988]: time="2026-04-17T23:45:17.706224386Z" level=info msg="StartContainer for \"b1ccfec0c4ca55d7d1ab2a1867d132fc4cf56c5edd32d69d78a06b88f20a71c8\" returns successfully"
Apr 17 23:45:17.751085 containerd[1988]: time="2026-04-17T23:45:17.750933876Z" level=info msg="shim disconnected" id=b1ccfec0c4ca55d7d1ab2a1867d132fc4cf56c5edd32d69d78a06b88f20a71c8 namespace=k8s.io
Apr 17 23:45:17.751778 containerd[1988]: time="2026-04-17T23:45:17.751743223Z" level=warning msg="cleaning up after shim disconnected" id=b1ccfec0c4ca55d7d1ab2a1867d132fc4cf56c5edd32d69d78a06b88f20a71c8 namespace=k8s.io
Apr 17 23:45:17.752061 containerd[1988]: time="2026-04-17T23:45:17.751912057Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:45:18.244978 systemd[1]: run-containerd-runc-k8s.io-b1ccfec0c4ca55d7d1ab2a1867d132fc4cf56c5edd32d69d78a06b88f20a71c8-runc.TxKSgn.mount: Deactivated successfully.
Apr 17 23:45:18.245114 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1ccfec0c4ca55d7d1ab2a1867d132fc4cf56c5edd32d69d78a06b88f20a71c8-rootfs.mount: Deactivated successfully.
Apr 17 23:45:18.541890 containerd[1988]: time="2026-04-17T23:45:18.541187989Z" level=info msg="CreateContainer within sandbox \"7d5f6a9025acd3d10bd7d1b501c8598067375f52ac8dcfcb29e4b726ee865ac0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 17 23:45:18.565073 kubelet[3199]: I0417 23:45:18.564984 3199 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-8rmxp" podStartSLOduration=2.671939451 podStartE2EDuration="15.564960072s" podCreationTimestamp="2026-04-17 23:45:03 +0000 UTC" firstStartedPulling="2026-04-17 23:45:03.63710644 +0000 UTC m=+6.417251088" lastFinishedPulling="2026-04-17 23:45:16.530127076 +0000 UTC m=+19.310271709" observedRunningTime="2026-04-17 23:45:17.749414242 +0000 UTC m=+20.529558897" watchObservedRunningTime="2026-04-17 23:45:18.564960072 +0000 UTC m=+21.345104727"
Apr 17 23:45:18.592626 containerd[1988]: time="2026-04-17T23:45:18.592573858Z" level=info msg="CreateContainer within sandbox \"7d5f6a9025acd3d10bd7d1b501c8598067375f52ac8dcfcb29e4b726ee865ac0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ea3dd394a947bc4f62e80874ea39b4b2ebdbeec0dca004b1f12134a0e1672e58\""
Apr 17 23:45:18.594788 containerd[1988]: time="2026-04-17T23:45:18.593568484Z" level=info msg="StartContainer for \"ea3dd394a947bc4f62e80874ea39b4b2ebdbeec0dca004b1f12134a0e1672e58\""
Apr 17 23:45:18.631984 systemd[1]: Started cri-containerd-ea3dd394a947bc4f62e80874ea39b4b2ebdbeec0dca004b1f12134a0e1672e58.scope - libcontainer container ea3dd394a947bc4f62e80874ea39b4b2ebdbeec0dca004b1f12134a0e1672e58.
Apr 17 23:45:18.666317 containerd[1988]: time="2026-04-17T23:45:18.666239866Z" level=info msg="StartContainer for \"ea3dd394a947bc4f62e80874ea39b4b2ebdbeec0dca004b1f12134a0e1672e58\" returns successfully"
Apr 17 23:45:18.951779 kubelet[3199]: I0417 23:45:18.951654 3199 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Apr 17 23:45:19.023074 systemd[1]: Created slice kubepods-burstable-podfa600c94_1f27_4214_83d1_0c5f883bf096.slice - libcontainer container kubepods-burstable-podfa600c94_1f27_4214_83d1_0c5f883bf096.slice.
Apr 17 23:45:19.036430 systemd[1]: Created slice kubepods-burstable-podbe238acd_d209_4dcf_a1a9_31aca086f6bc.slice - libcontainer container kubepods-burstable-podbe238acd_d209_4dcf_a1a9_31aca086f6bc.slice.
Apr 17 23:45:19.066938 kubelet[3199]: I0417 23:45:19.066891 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fa600c94-1f27-4214-83d1-0c5f883bf096-config-volume\") pod \"coredns-66bc5c9577-znpmx\" (UID: \"fa600c94-1f27-4214-83d1-0c5f883bf096\") " pod="kube-system/coredns-66bc5c9577-znpmx"
Apr 17 23:45:19.066938 kubelet[3199]: I0417 23:45:19.066942 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw66p\" (UniqueName: \"kubernetes.io/projected/be238acd-d209-4dcf-a1a9-31aca086f6bc-kube-api-access-rw66p\") pod \"coredns-66bc5c9577-nw49s\" (UID: \"be238acd-d209-4dcf-a1a9-31aca086f6bc\") " pod="kube-system/coredns-66bc5c9577-nw49s"
Apr 17 23:45:19.067181 kubelet[3199]: I0417 23:45:19.066973 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be238acd-d209-4dcf-a1a9-31aca086f6bc-config-volume\") pod \"coredns-66bc5c9577-nw49s\" (UID: \"be238acd-d209-4dcf-a1a9-31aca086f6bc\") " pod="kube-system/coredns-66bc5c9577-nw49s"
Apr 17 23:45:19.067181 kubelet[3199]: I0417 23:45:19.066997 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzxcq\" (UniqueName: \"kubernetes.io/projected/fa600c94-1f27-4214-83d1-0c5f883bf096-kube-api-access-lzxcq\") pod \"coredns-66bc5c9577-znpmx\" (UID: \"fa600c94-1f27-4214-83d1-0c5f883bf096\") " pod="kube-system/coredns-66bc5c9577-znpmx"
Apr 17 23:45:19.338213 containerd[1988]: time="2026-04-17T23:45:19.337744473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-znpmx,Uid:fa600c94-1f27-4214-83d1-0c5f883bf096,Namespace:kube-system,Attempt:0,}"
Apr 17 23:45:19.348241 containerd[1988]: time="2026-04-17T23:45:19.347846484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nw49s,Uid:be238acd-d209-4dcf-a1a9-31aca086f6bc,Namespace:kube-system,Attempt:0,}"
Apr 17 23:45:21.321139 systemd-networkd[1821]: cilium_host: Link UP
Apr 17 23:45:21.324374 systemd-networkd[1821]: cilium_net: Link UP
Apr 17 23:45:21.325010 systemd-networkd[1821]: cilium_net: Gained carrier
Apr 17 23:45:21.325748 systemd-networkd[1821]: cilium_host: Gained carrier
Apr 17 23:45:21.326570 (udev-worker)[4093]: Network interface NamePolicy= disabled on kernel command line.
Apr 17 23:45:21.327096 (udev-worker)[4127]: Network interface NamePolicy= disabled on kernel command line.
Apr 17 23:45:21.472687 systemd-networkd[1821]: cilium_vxlan: Link UP
Apr 17 23:45:21.472807 systemd-networkd[1821]: cilium_vxlan: Gained carrier
Apr 17 23:45:22.006747 kernel: NET: Registered PF_ALG protocol family
Apr 17 23:45:22.013962 systemd-networkd[1821]: cilium_host: Gained IPv6LL
Apr 17 23:45:22.207784 systemd-networkd[1821]: cilium_net: Gained IPv6LL
Apr 17 23:45:22.747854 (udev-worker)[4147]: Network interface NamePolicy= disabled on kernel command line.
Apr 17 23:45:22.750877 systemd-networkd[1821]: lxc_health: Link UP
Apr 17 23:45:22.756098 systemd-networkd[1821]: lxc_health: Gained carrier
Apr 17 23:45:22.988340 systemd-networkd[1821]: lxc6da51a67196b: Link UP
Apr 17 23:45:22.997752 kernel: eth0: renamed from tmp2c264
Apr 17 23:45:23.008304 systemd-networkd[1821]: lxc1293bb06fbca: Link UP
Apr 17 23:45:23.016009 kernel: eth0: renamed from tmp62279
Apr 17 23:45:23.016576 systemd-networkd[1821]: lxc6da51a67196b: Gained carrier
Apr 17 23:45:23.021075 systemd-networkd[1821]: lxc1293bb06fbca: Gained carrier
Apr 17 23:45:23.026391 (udev-worker)[4142]: Network interface NamePolicy= disabled on kernel command line.
Apr 17 23:45:23.101889 systemd-networkd[1821]: cilium_vxlan: Gained IPv6LL
Apr 17 23:45:23.351808 kubelet[3199]: I0417 23:45:23.350746 3199 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-brz64" podStartSLOduration=10.684725536 podStartE2EDuration="21.350721379s" podCreationTimestamp="2026-04-17 23:45:02 +0000 UTC" firstStartedPulling="2026-04-17 23:45:03.47900873 +0000 UTC m=+6.259153379" lastFinishedPulling="2026-04-17 23:45:14.145004569 +0000 UTC m=+16.925149222" observedRunningTime="2026-04-17 23:45:19.576644686 +0000 UTC m=+22.356789352" watchObservedRunningTime="2026-04-17 23:45:23.350721379 +0000 UTC m=+26.130866033"
Apr 17 23:45:24.449853 systemd-networkd[1821]: lxc6da51a67196b: Gained IPv6LL
Apr 17 23:45:24.509879 systemd-networkd[1821]: lxc1293bb06fbca: Gained IPv6LL
Apr 17 23:45:24.702407 systemd-networkd[1821]: lxc_health: Gained IPv6LL
Apr 17 23:45:26.968788 ntpd[1953]: Listen normally on 7 cilium_host 192.168.0.86:123
Apr 17 23:45:26.968885 ntpd[1953]: Listen normally on 8 cilium_net [fe80::7c07:aaff:fe0a:1768%4]:123
Apr 17 23:45:26.969312 ntpd[1953]: 17 Apr 23:45:26 ntpd[1953]: Listen normally on 7 cilium_host 192.168.0.86:123
Apr 17 23:45:26.969312 ntpd[1953]: 17 Apr 23:45:26 ntpd[1953]: Listen normally on 8 cilium_net [fe80::7c07:aaff:fe0a:1768%4]:123
Apr 17 23:45:26.969312 ntpd[1953]: 17 Apr 23:45:26 ntpd[1953]: Listen normally on 9 cilium_host [fe80::304c:efff:fe29:4c85%5]:123
Apr 17 23:45:26.969312 ntpd[1953]: 17 Apr 23:45:26 ntpd[1953]: Listen normally on 10 cilium_vxlan [fe80::c827:4dff:fe11:30a0%6]:123
Apr 17 23:45:26.969312 ntpd[1953]: 17 Apr 23:45:26 ntpd[1953]: Listen normally on 11 lxc_health [fe80::3816:36ff:feae:d399%8]:123
Apr 17 23:45:26.969312 ntpd[1953]: 17 Apr 23:45:26 ntpd[1953]: Listen normally on 12 lxc6da51a67196b [fe80::84c3:71ff:fedc:9e75%10]:123
Apr 17 23:45:26.969312 ntpd[1953]: 17 Apr 23:45:26 ntpd[1953]: Listen normally on 13 lxc1293bb06fbca [fe80::80f:36ff:fec5:4cdc%12]:123
Apr 17 23:45:26.968942 ntpd[1953]: Listen normally on 9 cilium_host [fe80::304c:efff:fe29:4c85%5]:123
Apr 17 23:45:26.968986 ntpd[1953]: Listen normally on 10 cilium_vxlan [fe80::c827:4dff:fe11:30a0%6]:123
Apr 17 23:45:26.969029 ntpd[1953]: Listen normally on 11 lxc_health [fe80::3816:36ff:feae:d399%8]:123
Apr 17 23:45:26.969068 ntpd[1953]: Listen normally on 12 lxc6da51a67196b [fe80::84c3:71ff:fedc:9e75%10]:123
Apr 17 23:45:26.969106 ntpd[1953]: Listen normally on 13 lxc1293bb06fbca [fe80::80f:36ff:fec5:4cdc%12]:123
Apr 17 23:45:27.661773 containerd[1988]: time="2026-04-17T23:45:27.658227106Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:45:27.661773 containerd[1988]: time="2026-04-17T23:45:27.658330231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:45:27.661773 containerd[1988]: time="2026-04-17T23:45:27.658372321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:45:27.661773 containerd[1988]: time="2026-04-17T23:45:27.658548014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:45:27.715954 systemd[1]: Started cri-containerd-2c264c35e5ee59446331ce7cb1c6df162f87e8603fd69bb02b33c5125ae8097e.scope - libcontainer container 2c264c35e5ee59446331ce7cb1c6df162f87e8603fd69bb02b33c5125ae8097e.
Apr 17 23:45:27.742984 containerd[1988]: time="2026-04-17T23:45:27.742565688Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:45:27.742984 containerd[1988]: time="2026-04-17T23:45:27.742644262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:45:27.742984 containerd[1988]: time="2026-04-17T23:45:27.742668621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:45:27.742984 containerd[1988]: time="2026-04-17T23:45:27.742812313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:45:27.797958 systemd[1]: Started cri-containerd-62279ab6807ab779bd9c020100ee328f657d59db12047223204178a3a3446a70.scope - libcontainer container 62279ab6807ab779bd9c020100ee328f657d59db12047223204178a3a3446a70.
Apr 17 23:45:27.841039 containerd[1988]: time="2026-04-17T23:45:27.840982880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-znpmx,Uid:fa600c94-1f27-4214-83d1-0c5f883bf096,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c264c35e5ee59446331ce7cb1c6df162f87e8603fd69bb02b33c5125ae8097e\""
Apr 17 23:45:27.854883 containerd[1988]: time="2026-04-17T23:45:27.854831684Z" level=info msg="CreateContainer within sandbox \"2c264c35e5ee59446331ce7cb1c6df162f87e8603fd69bb02b33c5125ae8097e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 17 23:45:27.899246 containerd[1988]: time="2026-04-17T23:45:27.899193851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nw49s,Uid:be238acd-d209-4dcf-a1a9-31aca086f6bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"62279ab6807ab779bd9c020100ee328f657d59db12047223204178a3a3446a70\""
Apr 17 23:45:27.905458 containerd[1988]: time="2026-04-17T23:45:27.905409815Z" level=info msg="CreateContainer within sandbox \"2c264c35e5ee59446331ce7cb1c6df162f87e8603fd69bb02b33c5125ae8097e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"52c5b73054888efdfb29b701e6074e800ceb55c516af717c5ca2851b703ba73a\""
Apr 17 23:45:27.906404 containerd[1988]: time="2026-04-17T23:45:27.906366458Z" level=info msg="StartContainer for \"52c5b73054888efdfb29b701e6074e800ceb55c516af717c5ca2851b703ba73a\""
Apr 17 23:45:27.911188 containerd[1988]: time="2026-04-17T23:45:27.910979746Z" level=info msg="CreateContainer within sandbox \"62279ab6807ab779bd9c020100ee328f657d59db12047223204178a3a3446a70\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 17 23:45:27.954094 containerd[1988]: time="2026-04-17T23:45:27.952442181Z" level=info msg="CreateContainer within sandbox \"62279ab6807ab779bd9c020100ee328f657d59db12047223204178a3a3446a70\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a0b3eb5b94280c405eda634bd96eda61e5868b4e02d14951be97f00ca7ded12e\""
Apr 17 23:45:27.954094 containerd[1988]: time="2026-04-17T23:45:27.953281206Z" level=info msg="StartContainer for \"a0b3eb5b94280c405eda634bd96eda61e5868b4e02d14951be97f00ca7ded12e\""
Apr 17 23:45:27.956841 systemd[1]: Started cri-containerd-52c5b73054888efdfb29b701e6074e800ceb55c516af717c5ca2851b703ba73a.scope - libcontainer container 52c5b73054888efdfb29b701e6074e800ceb55c516af717c5ca2851b703ba73a.
Apr 17 23:45:28.002325 systemd[1]: Started cri-containerd-a0b3eb5b94280c405eda634bd96eda61e5868b4e02d14951be97f00ca7ded12e.scope - libcontainer container a0b3eb5b94280c405eda634bd96eda61e5868b4e02d14951be97f00ca7ded12e.
Apr 17 23:45:28.023617 containerd[1988]: time="2026-04-17T23:45:28.023556920Z" level=info msg="StartContainer for \"52c5b73054888efdfb29b701e6074e800ceb55c516af717c5ca2851b703ba73a\" returns successfully"
Apr 17 23:45:28.043641 containerd[1988]: time="2026-04-17T23:45:28.043600460Z" level=info msg="StartContainer for \"a0b3eb5b94280c405eda634bd96eda61e5868b4e02d14951be97f00ca7ded12e\" returns successfully"
Apr 17 23:45:28.583780 kubelet[3199]: I0417 23:45:28.583707 3199 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-nw49s" podStartSLOduration=25.583672743 podStartE2EDuration="25.583672743s" podCreationTimestamp="2026-04-17 23:45:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:45:28.583123621 +0000 UTC m=+31.363268277" watchObservedRunningTime="2026-04-17 23:45:28.583672743 +0000 UTC m=+31.363817397"
Apr 17 23:45:28.677524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount223728907.mount: Deactivated successfully.
Apr 17 23:45:39.576238 systemd[1]: Started sshd@7-172.31.19.162:22-20.229.252.112:41660.service - OpenSSH per-connection server daemon (20.229.252.112:41660).
Apr 17 23:45:40.628601 sshd[4672]: Accepted publickey for core from 20.229.252.112 port 41660 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:45:40.630957 sshd[4672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:45:40.639443 systemd-logind[1967]: New session 8 of user core.
Apr 17 23:45:40.652023 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 17 23:45:42.046112 sshd[4672]: pam_unix(sshd:session): session closed for user core
Apr 17 23:45:42.049944 systemd[1]: sshd@7-172.31.19.162:22-20.229.252.112:41660.service: Deactivated successfully.
Apr 17 23:45:42.053087 systemd[1]: session-8.scope: Deactivated successfully.
Apr 17 23:45:42.056367 systemd-logind[1967]: Session 8 logged out. Waiting for processes to exit.
Apr 17 23:45:42.058470 systemd-logind[1967]: Removed session 8.
Apr 17 23:45:47.229218 systemd[1]: Started sshd@8-172.31.19.162:22-20.229.252.112:42506.service - OpenSSH per-connection server daemon (20.229.252.112:42506).
Apr 17 23:45:48.246739 sshd[4686]: Accepted publickey for core from 20.229.252.112 port 42506 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:45:48.247795 sshd[4686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:45:48.253174 systemd-logind[1967]: New session 9 of user core.
Apr 17 23:45:48.260964 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 17 23:45:49.032166 sshd[4686]: pam_unix(sshd:session): session closed for user core
Apr 17 23:45:49.038679 systemd-logind[1967]: Session 9 logged out. Waiting for processes to exit.
Apr 17 23:45:49.039767 systemd[1]: sshd@8-172.31.19.162:22-20.229.252.112:42506.service: Deactivated successfully.
Apr 17 23:45:49.043800 systemd[1]: session-9.scope: Deactivated successfully.
Apr 17 23:45:49.045051 systemd-logind[1967]: Removed session 9.
Apr 17 23:45:54.210513 systemd[1]: Started sshd@9-172.31.19.162:22-20.229.252.112:42512.service - OpenSSH per-connection server daemon (20.229.252.112:42512).
Apr 17 23:45:55.224182 sshd[4701]: Accepted publickey for core from 20.229.252.112 port 42512 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:45:55.225843 sshd[4701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:45:55.231134 systemd-logind[1967]: New session 10 of user core.
Apr 17 23:45:55.238955 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 17 23:45:56.001565 sshd[4701]: pam_unix(sshd:session): session closed for user core
Apr 17 23:45:56.006120 systemd[1]: sshd@9-172.31.19.162:22-20.229.252.112:42512.service: Deactivated successfully.
Apr 17 23:45:56.009032 systemd[1]: session-10.scope: Deactivated successfully.
Apr 17 23:45:56.009868 systemd-logind[1967]: Session 10 logged out. Waiting for processes to exit.
Apr 17 23:45:56.011283 systemd-logind[1967]: Removed session 10.
Apr 17 23:45:56.165669 systemd[1]: Started sshd@10-172.31.19.162:22-20.229.252.112:57100.service - OpenSSH per-connection server daemon (20.229.252.112:57100).
Apr 17 23:45:57.151680 sshd[4715]: Accepted publickey for core from 20.229.252.112 port 57100 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:45:57.152542 sshd[4715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:45:57.158822 systemd-logind[1967]: New session 11 of user core.
Apr 17 23:45:57.163950 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 17 23:45:57.981514 sshd[4715]: pam_unix(sshd:session): session closed for user core
Apr 17 23:45:57.984779 systemd[1]: sshd@10-172.31.19.162:22-20.229.252.112:57100.service: Deactivated successfully.
Apr 17 23:45:57.986974 systemd[1]: session-11.scope: Deactivated successfully.
Apr 17 23:45:57.989276 systemd-logind[1967]: Session 11 logged out. Waiting for processes to exit.
Apr 17 23:45:57.990596 systemd-logind[1967]: Removed session 11.
Apr 17 23:45:58.153153 systemd[1]: Started sshd@11-172.31.19.162:22-20.229.252.112:57114.service - OpenSSH per-connection server daemon (20.229.252.112:57114).
Apr 17 23:45:59.133438 sshd[4728]: Accepted publickey for core from 20.229.252.112 port 57114 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:45:59.136255 sshd[4728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:45:59.141935 systemd-logind[1967]: New session 12 of user core.
Apr 17 23:45:59.150971 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 17 23:45:59.882292 sshd[4728]: pam_unix(sshd:session): session closed for user core
Apr 17 23:45:59.886084 systemd[1]: sshd@11-172.31.19.162:22-20.229.252.112:57114.service: Deactivated successfully.
Apr 17 23:45:59.888771 systemd[1]: session-12.scope: Deactivated successfully.
Apr 17 23:45:59.889785 systemd-logind[1967]: Session 12 logged out. Waiting for processes to exit.
Apr 17 23:45:59.891810 systemd-logind[1967]: Removed session 12.
Apr 17 23:46:05.069106 systemd[1]: Started sshd@12-172.31.19.162:22-20.229.252.112:41398.service - OpenSSH per-connection server daemon (20.229.252.112:41398).
Apr 17 23:46:06.083343 sshd[4743]: Accepted publickey for core from 20.229.252.112 port 41398 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:46:06.085090 sshd[4743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:46:06.090508 systemd-logind[1967]: New session 13 of user core.
Apr 17 23:46:06.095951 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 17 23:46:06.855757 sshd[4743]: pam_unix(sshd:session): session closed for user core
Apr 17 23:46:06.860076 systemd-logind[1967]: Session 13 logged out. Waiting for processes to exit.
Apr 17 23:46:06.861116 systemd[1]: sshd@12-172.31.19.162:22-20.229.252.112:41398.service: Deactivated successfully.
Apr 17 23:46:06.863609 systemd[1]: session-13.scope: Deactivated successfully.
Apr 17 23:46:06.864760 systemd-logind[1967]: Removed session 13.
Apr 17 23:46:07.021054 systemd[1]: Started sshd@13-172.31.19.162:22-20.229.252.112:41410.service - OpenSSH per-connection server daemon (20.229.252.112:41410).
Apr 17 23:46:07.990598 sshd[4756]: Accepted publickey for core from 20.229.252.112 port 41410 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:46:07.992432 sshd[4756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:46:07.997662 systemd-logind[1967]: New session 14 of user core.
Apr 17 23:46:08.001971 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 17 23:46:09.284868 sshd[4756]: pam_unix(sshd:session): session closed for user core
Apr 17 23:46:09.292398 systemd[1]: sshd@13-172.31.19.162:22-20.229.252.112:41410.service: Deactivated successfully.
Apr 17 23:46:09.295488 systemd[1]: session-14.scope: Deactivated successfully.
Apr 17 23:46:09.297206 systemd-logind[1967]: Session 14 logged out. Waiting for processes to exit.
Apr 17 23:46:09.299473 systemd-logind[1967]: Removed session 14.
Apr 17 23:46:09.471489 systemd[1]: Started sshd@14-172.31.19.162:22-20.229.252.112:41418.service - OpenSSH per-connection server daemon (20.229.252.112:41418).
Apr 17 23:46:10.499738 sshd[4767]: Accepted publickey for core from 20.229.252.112 port 41418 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:46:10.501746 sshd[4767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:46:10.507721 systemd-logind[1967]: New session 15 of user core.
Apr 17 23:46:10.517983 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 17 23:46:11.960596 sshd[4767]: pam_unix(sshd:session): session closed for user core
Apr 17 23:46:11.965235 systemd[1]: sshd@14-172.31.19.162:22-20.229.252.112:41418.service: Deactivated successfully.
Apr 17 23:46:11.968332 systemd[1]: session-15.scope: Deactivated successfully.
Apr 17 23:46:11.969321 systemd-logind[1967]: Session 15 logged out. Waiting for processes to exit.
Apr 17 23:46:11.970519 systemd-logind[1967]: Removed session 15.
Apr 17 23:46:12.122917 systemd[1]: Started sshd@15-172.31.19.162:22-20.229.252.112:41432.service - OpenSSH per-connection server daemon (20.229.252.112:41432).
Apr 17 23:46:13.111382 sshd[4783]: Accepted publickey for core from 20.229.252.112 port 41432 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:46:13.113151 sshd[4783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:46:13.118779 systemd-logind[1967]: New session 16 of user core.
Apr 17 23:46:13.127956 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 17 23:46:14.078651 sshd[4783]: pam_unix(sshd:session): session closed for user core
Apr 17 23:46:14.083931 systemd-logind[1967]: Session 16 logged out. Waiting for processes to exit.
Apr 17 23:46:14.085079 systemd[1]: sshd@15-172.31.19.162:22-20.229.252.112:41432.service: Deactivated successfully.
Apr 17 23:46:14.087916 systemd[1]: session-16.scope: Deactivated successfully.
Apr 17 23:46:14.089214 systemd-logind[1967]: Removed session 16.
Apr 17 23:46:14.251042 systemd[1]: Started sshd@16-172.31.19.162:22-20.229.252.112:41446.service - OpenSSH per-connection server daemon (20.229.252.112:41446).
Apr 17 23:46:15.223270 sshd[4796]: Accepted publickey for core from 20.229.252.112 port 41446 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:46:15.225079 sshd[4796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:46:15.230972 systemd-logind[1967]: New session 17 of user core.
Apr 17 23:46:15.236955 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 17 23:46:15.973160 sshd[4796]: pam_unix(sshd:session): session closed for user core
Apr 17 23:46:15.976416 systemd[1]: sshd@16-172.31.19.162:22-20.229.252.112:41446.service: Deactivated successfully.
Apr 17 23:46:15.979136 systemd[1]: session-17.scope: Deactivated successfully.
Apr 17 23:46:15.981288 systemd-logind[1967]: Session 17 logged out. Waiting for processes to exit.
Apr 17 23:46:15.982573 systemd-logind[1967]: Removed session 17.
Apr 17 23:46:21.171138 systemd[1]: Started sshd@17-172.31.19.162:22-20.229.252.112:54162.service - OpenSSH per-connection server daemon (20.229.252.112:54162).
Apr 17 23:46:22.197384 sshd[4811]: Accepted publickey for core from 20.229.252.112 port 54162 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:46:22.200376 sshd[4811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:46:22.207559 systemd-logind[1967]: New session 18 of user core.
Apr 17 23:46:22.214344 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 17 23:46:22.978411 sshd[4811]: pam_unix(sshd:session): session closed for user core
Apr 17 23:46:22.983337 systemd-logind[1967]: Session 18 logged out. Waiting for processes to exit.
Apr 17 23:46:22.984269 systemd[1]: sshd@17-172.31.19.162:22-20.229.252.112:54162.service: Deactivated successfully.
Apr 17 23:46:22.987464 systemd[1]: session-18.scope: Deactivated successfully.
Apr 17 23:46:22.988866 systemd-logind[1967]: Removed session 18.
Apr 17 23:46:28.143097 systemd[1]: Started sshd@18-172.31.19.162:22-20.229.252.112:52940.service - OpenSSH per-connection server daemon (20.229.252.112:52940).
Apr 17 23:46:29.116831 sshd[4826]: Accepted publickey for core from 20.229.252.112 port 52940 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:46:29.117576 sshd[4826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:46:29.125861 systemd-logind[1967]: New session 19 of user core.
Apr 17 23:46:29.129906 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 17 23:46:29.856473 sshd[4826]: pam_unix(sshd:session): session closed for user core
Apr 17 23:46:29.860524 systemd[1]: sshd@18-172.31.19.162:22-20.229.252.112:52940.service: Deactivated successfully.
Apr 17 23:46:29.862814 systemd[1]: session-19.scope: Deactivated successfully.
Apr 17 23:46:29.863655 systemd-logind[1967]: Session 19 logged out. Waiting for processes to exit.
Apr 17 23:46:29.865195 systemd-logind[1967]: Removed session 19.
Apr 17 23:46:30.029141 systemd[1]: Started sshd@19-172.31.19.162:22-20.229.252.112:52956.service - OpenSSH per-connection server daemon (20.229.252.112:52956).
Apr 17 23:46:31.027640 sshd[4839]: Accepted publickey for core from 20.229.252.112 port 52956 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:46:31.028466 sshd[4839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:46:31.033681 systemd-logind[1967]: New session 20 of user core.
Apr 17 23:46:31.037022 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 17 23:46:32.871862 kubelet[3199]: I0417 23:46:32.869445 3199 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-znpmx" podStartSLOduration=89.869421798 podStartE2EDuration="1m29.869421798s" podCreationTimestamp="2026-04-17 23:45:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:45:28.621841204 +0000 UTC m=+31.401985858" watchObservedRunningTime="2026-04-17 23:46:32.869421798 +0000 UTC m=+95.649566454"
Apr 17 23:46:33.113124 containerd[1988]: time="2026-04-17T23:46:33.110650862Z" level=info msg="StopContainer for \"23e5d2dd2ebca6f4a01ee73b70f86c3c1ad91db47e6c17a492826f4a4774b697\" with timeout 30 (s)"
Apr 17 23:46:33.113124 containerd[1988]: time="2026-04-17T23:46:33.112995782Z" level=info msg="Stop container \"23e5d2dd2ebca6f4a01ee73b70f86c3c1ad91db47e6c17a492826f4a4774b697\" with signal terminated"
Apr 17 23:46:33.157733 containerd[1988]: time="2026-04-17T23:46:33.142776722Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 17 23:46:33.173999 containerd[1988]: time="2026-04-17T23:46:33.172478445Z" level=info msg="StopContainer for \"ea3dd394a947bc4f62e80874ea39b4b2ebdbeec0dca004b1f12134a0e1672e58\" with timeout 2 (s)"
Apr 17 23:46:33.174287 containerd[1988]: time="2026-04-17T23:46:33.174246436Z" level=info msg="Stop container \"ea3dd394a947bc4f62e80874ea39b4b2ebdbeec0dca004b1f12134a0e1672e58\" with signal terminated"
Apr 17 23:46:33.197957 systemd[1]: cri-containerd-23e5d2dd2ebca6f4a01ee73b70f86c3c1ad91db47e6c17a492826f4a4774b697.scope: Deactivated successfully.
Apr 17 23:46:33.205035 systemd-networkd[1821]: lxc_health: Link DOWN
Apr 17 23:46:33.206770 systemd-networkd[1821]: lxc_health: Lost carrier
Apr 17 23:46:33.253079 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-23e5d2dd2ebca6f4a01ee73b70f86c3c1ad91db47e6c17a492826f4a4774b697-rootfs.mount: Deactivated successfully.
Apr 17 23:46:33.258566 systemd[1]: cri-containerd-ea3dd394a947bc4f62e80874ea39b4b2ebdbeec0dca004b1f12134a0e1672e58.scope: Deactivated successfully.
Apr 17 23:46:33.260502 systemd[1]: cri-containerd-ea3dd394a947bc4f62e80874ea39b4b2ebdbeec0dca004b1f12134a0e1672e58.scope: Consumed 8.443s CPU time.
Apr 17 23:46:33.277652 containerd[1988]: time="2026-04-17T23:46:33.277582612Z" level=info msg="shim disconnected" id=23e5d2dd2ebca6f4a01ee73b70f86c3c1ad91db47e6c17a492826f4a4774b697 namespace=k8s.io
Apr 17 23:46:33.280717 containerd[1988]: time="2026-04-17T23:46:33.277916446Z" level=warning msg="cleaning up after shim disconnected" id=23e5d2dd2ebca6f4a01ee73b70f86c3c1ad91db47e6c17a492826f4a4774b697 namespace=k8s.io
Apr 17 23:46:33.280717 containerd[1988]: time="2026-04-17T23:46:33.277939213Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:46:33.308593 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea3dd394a947bc4f62e80874ea39b4b2ebdbeec0dca004b1f12134a0e1672e58-rootfs.mount: Deactivated successfully.
Apr 17 23:46:33.315984 containerd[1988]: time="2026-04-17T23:46:33.315941113Z" level=info msg="StopContainer for \"23e5d2dd2ebca6f4a01ee73b70f86c3c1ad91db47e6c17a492826f4a4774b697\" returns successfully"
Apr 17 23:46:33.316929 containerd[1988]: time="2026-04-17T23:46:33.316835596Z" level=info msg="shim disconnected" id=ea3dd394a947bc4f62e80874ea39b4b2ebdbeec0dca004b1f12134a0e1672e58 namespace=k8s.io
Apr 17 23:46:33.316929 containerd[1988]: time="2026-04-17T23:46:33.316907079Z" level=warning msg="cleaning up after shim disconnected" id=ea3dd394a947bc4f62e80874ea39b4b2ebdbeec0dca004b1f12134a0e1672e58 namespace=k8s.io
Apr 17 23:46:33.316929 containerd[1988]: time="2026-04-17T23:46:33.316921560Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:46:33.319253 containerd[1988]: time="2026-04-17T23:46:33.318236988Z" level=info msg="StopPodSandbox for \"2b081fa886b212ef8bb0f646ef285e5c07395b873c5d79e66b073da35799517d\""
Apr 17 23:46:33.319253 containerd[1988]: time="2026-04-17T23:46:33.318282000Z" level=info msg="Container to stop \"23e5d2dd2ebca6f4a01ee73b70f86c3c1ad91db47e6c17a492826f4a4774b697\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 23:46:33.323244 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2b081fa886b212ef8bb0f646ef285e5c07395b873c5d79e66b073da35799517d-shm.mount: Deactivated successfully.
Apr 17 23:46:33.331640 systemd[1]: cri-containerd-2b081fa886b212ef8bb0f646ef285e5c07395b873c5d79e66b073da35799517d.scope: Deactivated successfully.
Apr 17 23:46:33.354847 containerd[1988]: time="2026-04-17T23:46:33.354788492Z" level=warning msg="cleanup warnings time=\"2026-04-17T23:46:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 17 23:46:33.360836 containerd[1988]: time="2026-04-17T23:46:33.360694045Z" level=info msg="StopContainer for \"ea3dd394a947bc4f62e80874ea39b4b2ebdbeec0dca004b1f12134a0e1672e58\" returns successfully"
Apr 17 23:46:33.364023 containerd[1988]: time="2026-04-17T23:46:33.361316699Z" level=info msg="StopPodSandbox for \"7d5f6a9025acd3d10bd7d1b501c8598067375f52ac8dcfcb29e4b726ee865ac0\""
Apr 17 23:46:33.364023 containerd[1988]: time="2026-04-17T23:46:33.361363277Z" level=info msg="Container to stop \"7a97223adc8cad5b1a81449b6a647e94fd129e3ac2ebe88e418587fb9b9ca3e7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 23:46:33.364023 containerd[1988]: time="2026-04-17T23:46:33.361392922Z" level=info msg="Container to stop \"0a61746baf742d3cb2f2a231075bf4899aea79e30569af2a6c03217abbcb8650\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 23:46:33.364023 containerd[1988]: time="2026-04-17T23:46:33.361411417Z" level=info msg="Container to stop \"b1ccfec0c4ca55d7d1ab2a1867d132fc4cf56c5edd32d69d78a06b88f20a71c8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 23:46:33.364023 containerd[1988]: time="2026-04-17T23:46:33.361427561Z" level=info msg="Container to stop \"ea3dd394a947bc4f62e80874ea39b4b2ebdbeec0dca004b1f12134a0e1672e58\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 23:46:33.364023 containerd[1988]: time="2026-04-17T23:46:33.361442038Z" level=info msg="Container to stop \"0cb93723c14ac2f266ef06b9ac5f4084528e4cc601d29e45173d30c9e7c2fe4c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 17 23:46:33.364398 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7d5f6a9025acd3d10bd7d1b501c8598067375f52ac8dcfcb29e4b726ee865ac0-shm.mount: Deactivated successfully.
Apr 17 23:46:33.373204 systemd[1]: cri-containerd-7d5f6a9025acd3d10bd7d1b501c8598067375f52ac8dcfcb29e4b726ee865ac0.scope: Deactivated successfully.
Apr 17 23:46:33.392170 containerd[1988]: time="2026-04-17T23:46:33.391102369Z" level=info msg="shim disconnected" id=2b081fa886b212ef8bb0f646ef285e5c07395b873c5d79e66b073da35799517d namespace=k8s.io
Apr 17 23:46:33.392170 containerd[1988]: time="2026-04-17T23:46:33.391226610Z" level=warning msg="cleaning up after shim disconnected" id=2b081fa886b212ef8bb0f646ef285e5c07395b873c5d79e66b073da35799517d namespace=k8s.io
Apr 17 23:46:33.392170 containerd[1988]: time="2026-04-17T23:46:33.391294016Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:46:33.427406 containerd[1988]: time="2026-04-17T23:46:33.426162577Z" level=info msg="shim disconnected" id=7d5f6a9025acd3d10bd7d1b501c8598067375f52ac8dcfcb29e4b726ee865ac0 namespace=k8s.io
Apr 17 23:46:33.427406 containerd[1988]: time="2026-04-17T23:46:33.426243500Z" level=warning msg="cleaning up after shim disconnected" id=7d5f6a9025acd3d10bd7d1b501c8598067375f52ac8dcfcb29e4b726ee865ac0 namespace=k8s.io
Apr 17 23:46:33.427406 containerd[1988]: time="2026-04-17T23:46:33.426255790Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:46:33.439548 containerd[1988]: time="2026-04-17T23:46:33.439496784Z" level=info msg="TearDown network for sandbox \"2b081fa886b212ef8bb0f646ef285e5c07395b873c5d79e66b073da35799517d\" successfully"
Apr 17 23:46:33.439767 containerd[1988]: time="2026-04-17T23:46:33.439743006Z" level=info msg="StopPodSandbox for \"2b081fa886b212ef8bb0f646ef285e5c07395b873c5d79e66b073da35799517d\" returns successfully"
Apr 17 23:46:33.450519 containerd[1988]: time="2026-04-17T23:46:33.449961313Z" level=warning msg="cleanup warnings time=\"2026-04-17T23:46:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 17 23:46:33.452285 containerd[1988]: time="2026-04-17T23:46:33.451559541Z" level=info msg="TearDown network for sandbox \"7d5f6a9025acd3d10bd7d1b501c8598067375f52ac8dcfcb29e4b726ee865ac0\" successfully"
Apr 17 23:46:33.452285 containerd[1988]: time="2026-04-17T23:46:33.451595158Z" level=info msg="StopPodSandbox for \"7d5f6a9025acd3d10bd7d1b501c8598067375f52ac8dcfcb29e4b726ee865ac0\" returns successfully"
Apr 17 23:46:33.520721 kubelet[3199]: I0417 23:46:33.519649 3199 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/082682f2-c210-4d89-813d-1d82ed4c11a0-cilium-config-path\") pod \"082682f2-c210-4d89-813d-1d82ed4c11a0\" (UID: \"082682f2-c210-4d89-813d-1d82ed4c11a0\") "
Apr 17 23:46:33.520721 kubelet[3199]: I0417 23:46:33.519744 3199 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqwfw\" (UniqueName: \"kubernetes.io/projected/410ee967-0222-4877-b051-a9f67fe1a8e0-kube-api-access-sqwfw\") pod \"410ee967-0222-4877-b051-a9f67fe1a8e0\" (UID: \"410ee967-0222-4877-b051-a9f67fe1a8e0\") "
Apr 17 23:46:33.520721 kubelet[3199]: I0417 23:46:33.519774 3199 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/082682f2-c210-4d89-813d-1d82ed4c11a0-host-proc-sys-kernel\") pod \"082682f2-c210-4d89-813d-1d82ed4c11a0\" (UID: \"082682f2-c210-4d89-813d-1d82ed4c11a0\") "
Apr 17 23:46:33.520721 kubelet[3199]: I0417 23:46:33.519800 3199 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/082682f2-c210-4d89-813d-1d82ed4c11a0-clustermesh-secrets\") pod \"082682f2-c210-4d89-813d-1d82ed4c11a0\" (UID: \"082682f2-c210-4d89-813d-1d82ed4c11a0\") "
Apr 17 23:46:33.520721 kubelet[3199]: I0417 23:46:33.519827 3199 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/082682f2-c210-4d89-813d-1d82ed4c11a0-hostproc\") pod \"082682f2-c210-4d89-813d-1d82ed4c11a0\" (UID: \"082682f2-c210-4d89-813d-1d82ed4c11a0\") "
Apr 17 23:46:33.520721 kubelet[3199]: I0417 23:46:33.519846 3199 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/082682f2-c210-4d89-813d-1d82ed4c11a0-host-proc-sys-net\") pod \"082682f2-c210-4d89-813d-1d82ed4c11a0\" (UID: \"082682f2-c210-4d89-813d-1d82ed4c11a0\") "
Apr 17 23:46:33.521144 kubelet[3199]: I0417 23:46:33.519874 3199 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/082682f2-c210-4d89-813d-1d82ed4c11a0-cilium-cgroup\") pod \"082682f2-c210-4d89-813d-1d82ed4c11a0\" (UID: \"082682f2-c210-4d89-813d-1d82ed4c11a0\") "
Apr 17 23:46:33.521144 kubelet[3199]: I0417 23:46:33.519894 3199 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/082682f2-c210-4d89-813d-1d82ed4c11a0-bpf-maps\") pod \"082682f2-c210-4d89-813d-1d82ed4c11a0\" (UID: \"082682f2-c210-4d89-813d-1d82ed4c11a0\") "
Apr 17 23:46:33.521144 kubelet[3199]: I0417 23:46:33.519916 3199 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gq9r\" (UniqueName: \"kubernetes.io/projected/082682f2-c210-4d89-813d-1d82ed4c11a0-kube-api-access-4gq9r\") pod \"082682f2-c210-4d89-813d-1d82ed4c11a0\" (UID: \"082682f2-c210-4d89-813d-1d82ed4c11a0\") "
Apr 17 23:46:33.521144 kubelet[3199]: I0417 23:46:33.519936 3199 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/082682f2-c210-4d89-813d-1d82ed4c11a0-etc-cni-netd\") pod \"082682f2-c210-4d89-813d-1d82ed4c11a0\" (UID: \"082682f2-c210-4d89-813d-1d82ed4c11a0\") "
Apr 17 23:46:33.521144 kubelet[3199]: I0417 23:46:33.519957 3199 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/082682f2-c210-4d89-813d-1d82ed4c11a0-lib-modules\") pod \"082682f2-c210-4d89-813d-1d82ed4c11a0\" (UID: \"082682f2-c210-4d89-813d-1d82ed4c11a0\") "
Apr 17 23:46:33.521144 kubelet[3199]: I0417 23:46:33.519977 3199 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/082682f2-c210-4d89-813d-1d82ed4c11a0-xtables-lock\") pod \"082682f2-c210-4d89-813d-1d82ed4c11a0\" (UID: \"082682f2-c210-4d89-813d-1d82ed4c11a0\") "
Apr 17 23:46:33.521422 kubelet[3199]: I0417 23:46:33.520002 3199 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/082682f2-c210-4d89-813d-1d82ed4c11a0-cilium-run\") pod \"082682f2-c210-4d89-813d-1d82ed4c11a0\" (UID: \"082682f2-c210-4d89-813d-1d82ed4c11a0\") "
Apr 17 23:46:33.521422 kubelet[3199]: I0417 23:46:33.520029 3199 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/082682f2-c210-4d89-813d-1d82ed4c11a0-hubble-tls\") pod \"082682f2-c210-4d89-813d-1d82ed4c11a0\" (UID: \"082682f2-c210-4d89-813d-1d82ed4c11a0\") "
Apr 17 23:46:33.521422 kubelet[3199]: I0417 23:46:33.520225 3199 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/082682f2-c210-4d89-813d-1d82ed4c11a0-cni-path\") pod \"082682f2-c210-4d89-813d-1d82ed4c11a0\" (UID: \"082682f2-c210-4d89-813d-1d82ed4c11a0\") "
Apr 17 23:46:33.521422 kubelet[3199]: I0417 23:46:33.520254 3199 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/410ee967-0222-4877-b051-a9f67fe1a8e0-cilium-config-path\") pod \"410ee967-0222-4877-b051-a9f67fe1a8e0\" (UID: \"410ee967-0222-4877-b051-a9f67fe1a8e0\") "
Apr 17 23:46:33.542873 kubelet[3199]: I0417 23:46:33.536556 3199 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/082682f2-c210-4d89-813d-1d82ed4c11a0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "082682f2-c210-4d89-813d-1d82ed4c11a0" (UID: "082682f2-c210-4d89-813d-1d82ed4c11a0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:46:33.546974 kubelet[3199]: I0417 23:46:33.546622 3199 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/410ee967-0222-4877-b051-a9f67fe1a8e0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "410ee967-0222-4877-b051-a9f67fe1a8e0" (UID: "410ee967-0222-4877-b051-a9f67fe1a8e0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 17 23:46:33.546974 kubelet[3199]: I0417 23:46:33.546649 3199 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/082682f2-c210-4d89-813d-1d82ed4c11a0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "082682f2-c210-4d89-813d-1d82ed4c11a0" (UID: "082682f2-c210-4d89-813d-1d82ed4c11a0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 17 23:46:33.546974 kubelet[3199]: I0417 23:46:33.546777 3199 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/082682f2-c210-4d89-813d-1d82ed4c11a0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "082682f2-c210-4d89-813d-1d82ed4c11a0" (UID: "082682f2-c210-4d89-813d-1d82ed4c11a0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 17 23:46:33.546974 kubelet[3199]: I0417 23:46:33.546818 3199 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/082682f2-c210-4d89-813d-1d82ed4c11a0-hostproc" (OuterVolumeSpecName: "hostproc") pod "082682f2-c210-4d89-813d-1d82ed4c11a0" (UID: "082682f2-c210-4d89-813d-1d82ed4c11a0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:46:33.546974 kubelet[3199]: I0417 23:46:33.546839 3199 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/082682f2-c210-4d89-813d-1d82ed4c11a0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "082682f2-c210-4d89-813d-1d82ed4c11a0" (UID: "082682f2-c210-4d89-813d-1d82ed4c11a0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:46:33.547304 kubelet[3199]: I0417 23:46:33.546858 3199 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/082682f2-c210-4d89-813d-1d82ed4c11a0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "082682f2-c210-4d89-813d-1d82ed4c11a0" (UID: "082682f2-c210-4d89-813d-1d82ed4c11a0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:46:33.547304 kubelet[3199]: I0417 23:46:33.546877 3199 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/082682f2-c210-4d89-813d-1d82ed4c11a0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "082682f2-c210-4d89-813d-1d82ed4c11a0" (UID: "082682f2-c210-4d89-813d-1d82ed4c11a0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:46:33.548188 kubelet[3199]: I0417 23:46:33.547946 3199 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/082682f2-c210-4d89-813d-1d82ed4c11a0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "082682f2-c210-4d89-813d-1d82ed4c11a0" (UID: "082682f2-c210-4d89-813d-1d82ed4c11a0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:46:33.548188 kubelet[3199]: I0417 23:46:33.547992 3199 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/082682f2-c210-4d89-813d-1d82ed4c11a0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "082682f2-c210-4d89-813d-1d82ed4c11a0" (UID: "082682f2-c210-4d89-813d-1d82ed4c11a0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:46:33.548188 kubelet[3199]: I0417 23:46:33.548017 3199 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/082682f2-c210-4d89-813d-1d82ed4c11a0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "082682f2-c210-4d89-813d-1d82ed4c11a0" (UID: "082682f2-c210-4d89-813d-1d82ed4c11a0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:46:33.548188 kubelet[3199]: I0417 23:46:33.548134 3199 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/082682f2-c210-4d89-813d-1d82ed4c11a0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "082682f2-c210-4d89-813d-1d82ed4c11a0" (UID: "082682f2-c210-4d89-813d-1d82ed4c11a0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:46:33.548606 kubelet[3199]: I0417 23:46:33.548448 3199 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/082682f2-c210-4d89-813d-1d82ed4c11a0-cni-path" (OuterVolumeSpecName: "cni-path") pod "082682f2-c210-4d89-813d-1d82ed4c11a0" (UID: "082682f2-c210-4d89-813d-1d82ed4c11a0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 17 23:46:33.560542 kubelet[3199]: I0417 23:46:33.560472 3199 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/082682f2-c210-4d89-813d-1d82ed4c11a0-kube-api-access-4gq9r" (OuterVolumeSpecName: "kube-api-access-4gq9r") pod "082682f2-c210-4d89-813d-1d82ed4c11a0" (UID: "082682f2-c210-4d89-813d-1d82ed4c11a0"). InnerVolumeSpecName "kube-api-access-4gq9r". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 17 23:46:33.560688 kubelet[3199]: I0417 23:46:33.560555 3199 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/082682f2-c210-4d89-813d-1d82ed4c11a0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "082682f2-c210-4d89-813d-1d82ed4c11a0" (UID: "082682f2-c210-4d89-813d-1d82ed4c11a0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 17 23:46:33.560688 kubelet[3199]: I0417 23:46:33.560576 3199 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/410ee967-0222-4877-b051-a9f67fe1a8e0-kube-api-access-sqwfw" (OuterVolumeSpecName: "kube-api-access-sqwfw") pod "410ee967-0222-4877-b051-a9f67fe1a8e0" (UID: "410ee967-0222-4877-b051-a9f67fe1a8e0"). InnerVolumeSpecName "kube-api-access-sqwfw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 17 23:46:33.621004 kubelet[3199]: I0417 23:46:33.620957 3199 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/082682f2-c210-4d89-813d-1d82ed4c11a0-cilium-cgroup\") on node \"ip-172-31-19-162\" DevicePath \"\""
Apr 17 23:46:33.621004 kubelet[3199]: I0417 23:46:33.620994 3199 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/082682f2-c210-4d89-813d-1d82ed4c11a0-bpf-maps\") on node \"ip-172-31-19-162\" DevicePath \"\""
Apr 17 23:46:33.621004 kubelet[3199]: I0417 23:46:33.621007 3199 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4gq9r\" (UniqueName: \"kubernetes.io/projected/082682f2-c210-4d89-813d-1d82ed4c11a0-kube-api-access-4gq9r\") on node \"ip-172-31-19-162\" DevicePath \"\""
Apr 17 23:46:33.621004 kubelet[3199]: I0417 23:46:33.621020 3199 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/082682f2-c210-4d89-813d-1d82ed4c11a0-etc-cni-netd\") on node \"ip-172-31-19-162\" DevicePath \"\""
Apr 17 23:46:33.621280 kubelet[3199]: I0417 23:46:33.621031 3199 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/082682f2-c210-4d89-813d-1d82ed4c11a0-lib-modules\") on node \"ip-172-31-19-162\" DevicePath \"\""
Apr 17 23:46:33.621280 kubelet[3199]: I0417 23:46:33.621042 3199 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/082682f2-c210-4d89-813d-1d82ed4c11a0-xtables-lock\") on node \"ip-172-31-19-162\" DevicePath \"\""
Apr 17 23:46:33.621280 kubelet[3199]: I0417 23:46:33.621071 3199 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/082682f2-c210-4d89-813d-1d82ed4c11a0-cilium-run\") on node \"ip-172-31-19-162\" DevicePath \"\""
Apr 17 23:46:33.621280 kubelet[3199]: I0417 23:46:33.621081 3199 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/082682f2-c210-4d89-813d-1d82ed4c11a0-hubble-tls\") on node \"ip-172-31-19-162\" DevicePath \"\""
Apr 17 23:46:33.621280 kubelet[3199]: I0417 23:46:33.621091 3199 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/082682f2-c210-4d89-813d-1d82ed4c11a0-cni-path\") on node \"ip-172-31-19-162\" DevicePath \"\""
Apr 17 23:46:33.621280 kubelet[3199]: I0417 23:46:33.621102 3199 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/410ee967-0222-4877-b051-a9f67fe1a8e0-cilium-config-path\") on node \"ip-172-31-19-162\" DevicePath \"\""
Apr 17 23:46:33.621280 kubelet[3199]: I0417 23:46:33.621112 3199 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/082682f2-c210-4d89-813d-1d82ed4c11a0-cilium-config-path\") on node \"ip-172-31-19-162\" DevicePath \"\""
Apr 17 23:46:33.621280 kubelet[3199]: I0417 23:46:33.621123 3199 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sqwfw\" (UniqueName: \"kubernetes.io/projected/410ee967-0222-4877-b051-a9f67fe1a8e0-kube-api-access-sqwfw\") on node \"ip-172-31-19-162\" DevicePath \"\""
Apr 17 23:46:33.621480 kubelet[3199]: I0417 23:46:33.621135 3199 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/082682f2-c210-4d89-813d-1d82ed4c11a0-host-proc-sys-kernel\") on node \"ip-172-31-19-162\" DevicePath \"\""
Apr 17 23:46:33.621480 kubelet[3199]: I0417 23:46:33.621147 3199 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/082682f2-c210-4d89-813d-1d82ed4c11a0-clustermesh-secrets\") on node \"ip-172-31-19-162\" DevicePath \"\""
Apr 17 23:46:33.621480 kubelet[3199]: I0417 23:46:33.621157 3199 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/082682f2-c210-4d89-813d-1d82ed4c11a0-hostproc\") on node \"ip-172-31-19-162\" DevicePath \"\""
Apr 17 23:46:33.621480 kubelet[3199]: I0417 23:46:33.621168 3199 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/082682f2-c210-4d89-813d-1d82ed4c11a0-host-proc-sys-net\") on node \"ip-172-31-19-162\" DevicePath \"\""
Apr 17 23:46:33.713095 kubelet[3199]: I0417 23:46:33.712319 3199 scope.go:117] "RemoveContainer" containerID="ea3dd394a947bc4f62e80874ea39b4b2ebdbeec0dca004b1f12134a0e1672e58"
Apr 17 23:46:33.725101 systemd[1]: Removed slice kubepods-burstable-pod082682f2_c210_4d89_813d_1d82ed4c11a0.slice - libcontainer container kubepods-burstable-pod082682f2_c210_4d89_813d_1d82ed4c11a0.slice.
Apr 17 23:46:33.725536 systemd[1]: kubepods-burstable-pod082682f2_c210_4d89_813d_1d82ed4c11a0.slice: Consumed 8.541s CPU time.
Apr 17 23:46:33.739290 containerd[1988]: time="2026-04-17T23:46:33.738890914Z" level=info msg="RemoveContainer for \"ea3dd394a947bc4f62e80874ea39b4b2ebdbeec0dca004b1f12134a0e1672e58\""
Apr 17 23:46:33.745769 systemd[1]: Removed slice kubepods-besteffort-pod410ee967_0222_4877_b051_a9f67fe1a8e0.slice - libcontainer container kubepods-besteffort-pod410ee967_0222_4877_b051_a9f67fe1a8e0.slice.
Apr 17 23:46:33.752023 containerd[1988]: time="2026-04-17T23:46:33.751965949Z" level=info msg="RemoveContainer for \"ea3dd394a947bc4f62e80874ea39b4b2ebdbeec0dca004b1f12134a0e1672e58\" returns successfully"
Apr 17 23:46:33.753484 kubelet[3199]: I0417 23:46:33.752993 3199 scope.go:117] "RemoveContainer" containerID="b1ccfec0c4ca55d7d1ab2a1867d132fc4cf56c5edd32d69d78a06b88f20a71c8"
Apr 17 23:46:33.756623 containerd[1988]: time="2026-04-17T23:46:33.756562999Z" level=info msg="RemoveContainer for \"b1ccfec0c4ca55d7d1ab2a1867d132fc4cf56c5edd32d69d78a06b88f20a71c8\""
Apr 17 23:46:33.762314 containerd[1988]: time="2026-04-17T23:46:33.761946448Z" level=info msg="RemoveContainer for \"b1ccfec0c4ca55d7d1ab2a1867d132fc4cf56c5edd32d69d78a06b88f20a71c8\" returns successfully"
Apr 17 23:46:33.763275 kubelet[3199]: I0417 23:46:33.762647 3199 scope.go:117] "RemoveContainer" containerID="0cb93723c14ac2f266ef06b9ac5f4084528e4cc601d29e45173d30c9e7c2fe4c"
Apr 17 23:46:33.765042 containerd[1988]: time="2026-04-17T23:46:33.765007985Z" level=info msg="RemoveContainer for \"0cb93723c14ac2f266ef06b9ac5f4084528e4cc601d29e45173d30c9e7c2fe4c\""
Apr 17 23:46:33.771625 containerd[1988]: time="2026-04-17T23:46:33.771574164Z" level=info msg="RemoveContainer for \"0cb93723c14ac2f266ef06b9ac5f4084528e4cc601d29e45173d30c9e7c2fe4c\" returns successfully"
Apr 17 23:46:33.771938 kubelet[3199]: I0417 23:46:33.771906 3199 scope.go:117] "RemoveContainer" containerID="0a61746baf742d3cb2f2a231075bf4899aea79e30569af2a6c03217abbcb8650"
Apr 17 23:46:33.774967 containerd[1988]: time="2026-04-17T23:46:33.774870172Z" level=info msg="RemoveContainer for \"0a61746baf742d3cb2f2a231075bf4899aea79e30569af2a6c03217abbcb8650\""
Apr 17 23:46:33.780714 containerd[1988]: time="2026-04-17T23:46:33.780647623Z" level=info msg="RemoveContainer for \"0a61746baf742d3cb2f2a231075bf4899aea79e30569af2a6c03217abbcb8650\" returns successfully"
Apr 17 23:46:33.780993 kubelet[3199]: I0417 23:46:33.780969 3199 scope.go:117] "RemoveContainer" containerID="7a97223adc8cad5b1a81449b6a647e94fd129e3ac2ebe88e418587fb9b9ca3e7"
Apr 17 23:46:33.782152 containerd[1988]: time="2026-04-17T23:46:33.782112846Z" level=info msg="RemoveContainer for \"7a97223adc8cad5b1a81449b6a647e94fd129e3ac2ebe88e418587fb9b9ca3e7\""
Apr 17 23:46:33.787180 containerd[1988]: time="2026-04-17T23:46:33.787144685Z" level=info msg="RemoveContainer for \"7a97223adc8cad5b1a81449b6a647e94fd129e3ac2ebe88e418587fb9b9ca3e7\" returns successfully"
Apr 17 23:46:33.787442 kubelet[3199]: I0417 23:46:33.787414 3199 scope.go:117] "RemoveContainer" containerID="ea3dd394a947bc4f62e80874ea39b4b2ebdbeec0dca004b1f12134a0e1672e58"
Apr 17 23:46:33.797214 containerd[1988]: time="2026-04-17T23:46:33.788868665Z" level=error msg="ContainerStatus for \"ea3dd394a947bc4f62e80874ea39b4b2ebdbeec0dca004b1f12134a0e1672e58\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ea3dd394a947bc4f62e80874ea39b4b2ebdbeec0dca004b1f12134a0e1672e58\": not found"
Apr 17 23:46:33.801118 kubelet[3199]: E0417 23:46:33.801058 3199 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ea3dd394a947bc4f62e80874ea39b4b2ebdbeec0dca004b1f12134a0e1672e58\": not found" containerID="ea3dd394a947bc4f62e80874ea39b4b2ebdbeec0dca004b1f12134a0e1672e58"
Apr 17 23:46:33.801657 kubelet[3199]: I0417 23:46:33.801401 3199 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ea3dd394a947bc4f62e80874ea39b4b2ebdbeec0dca004b1f12134a0e1672e58"} err="failed to get container status \"ea3dd394a947bc4f62e80874ea39b4b2ebdbeec0dca004b1f12134a0e1672e58\": rpc error: code = NotFound desc = an error occurred when try to find container \"ea3dd394a947bc4f62e80874ea39b4b2ebdbeec0dca004b1f12134a0e1672e58\": not found"
Apr 17 23:46:33.801657 kubelet[3199]: I0417 23:46:33.801486 3199 scope.go:117] "RemoveContainer" containerID="b1ccfec0c4ca55d7d1ab2a1867d132fc4cf56c5edd32d69d78a06b88f20a71c8"
Apr 17 23:46:33.806290 containerd[1988]: time="2026-04-17T23:46:33.806233656Z" level=error msg="ContainerStatus for \"b1ccfec0c4ca55d7d1ab2a1867d132fc4cf56c5edd32d69d78a06b88f20a71c8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b1ccfec0c4ca55d7d1ab2a1867d132fc4cf56c5edd32d69d78a06b88f20a71c8\": not found"
Apr 17 23:46:33.806497 kubelet[3199]: E0417 23:46:33.806462 3199 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b1ccfec0c4ca55d7d1ab2a1867d132fc4cf56c5edd32d69d78a06b88f20a71c8\": not found" containerID="b1ccfec0c4ca55d7d1ab2a1867d132fc4cf56c5edd32d69d78a06b88f20a71c8"
Apr 17 23:46:33.806595 kubelet[3199]: I0417 23:46:33.806507 3199 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b1ccfec0c4ca55d7d1ab2a1867d132fc4cf56c5edd32d69d78a06b88f20a71c8"} err="failed to get container status \"b1ccfec0c4ca55d7d1ab2a1867d132fc4cf56c5edd32d69d78a06b88f20a71c8\": rpc error: code = NotFound desc = an error occurred when try to find container \"b1ccfec0c4ca55d7d1ab2a1867d132fc4cf56c5edd32d69d78a06b88f20a71c8\": not found"
Apr 17 23:46:33.806595 kubelet[3199]: I0417 23:46:33.806539 3199 scope.go:117] "RemoveContainer" containerID="0cb93723c14ac2f266ef06b9ac5f4084528e4cc601d29e45173d30c9e7c2fe4c"
Apr 17 23:46:33.806849 containerd[1988]: time="2026-04-17T23:46:33.806806691Z" level=error msg="ContainerStatus for \"0cb93723c14ac2f266ef06b9ac5f4084528e4cc601d29e45173d30c9e7c2fe4c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0cb93723c14ac2f266ef06b9ac5f4084528e4cc601d29e45173d30c9e7c2fe4c\": not found"
Apr 17 23:46:33.807062 kubelet[3199]: E0417 23:46:33.807043 3199 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0cb93723c14ac2f266ef06b9ac5f4084528e4cc601d29e45173d30c9e7c2fe4c\": not found" containerID="0cb93723c14ac2f266ef06b9ac5f4084528e4cc601d29e45173d30c9e7c2fe4c"
Apr 17 23:46:33.807138 kubelet[3199]: I0417 23:46:33.807072 3199 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0cb93723c14ac2f266ef06b9ac5f4084528e4cc601d29e45173d30c9e7c2fe4c"} err="failed to get container status \"0cb93723c14ac2f266ef06b9ac5f4084528e4cc601d29e45173d30c9e7c2fe4c\": rpc error: code = NotFound desc = an error occurred when try to find container \"0cb93723c14ac2f266ef06b9ac5f4084528e4cc601d29e45173d30c9e7c2fe4c\": not found"
Apr 17 23:46:33.807138 kubelet[3199]: I0417 23:46:33.807097 3199 scope.go:117] "RemoveContainer" containerID="0a61746baf742d3cb2f2a231075bf4899aea79e30569af2a6c03217abbcb8650"
Apr 17 23:46:33.807327 containerd[1988]: time="2026-04-17T23:46:33.807281548Z" level=error msg="ContainerStatus for \"0a61746baf742d3cb2f2a231075bf4899aea79e30569af2a6c03217abbcb8650\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0a61746baf742d3cb2f2a231075bf4899aea79e30569af2a6c03217abbcb8650\": not found"
Apr 17 23:46:33.807451 kubelet[3199]: E0417 23:46:33.807427 3199 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0a61746baf742d3cb2f2a231075bf4899aea79e30569af2a6c03217abbcb8650\": not found" containerID="0a61746baf742d3cb2f2a231075bf4899aea79e30569af2a6c03217abbcb8650"
Apr 17 23:46:33.807524 kubelet[3199]: I0417 23:46:33.807453 3199 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0a61746baf742d3cb2f2a231075bf4899aea79e30569af2a6c03217abbcb8650"} err="failed to get container status \"0a61746baf742d3cb2f2a231075bf4899aea79e30569af2a6c03217abbcb8650\": rpc error: code = NotFound desc = an error occurred when try to find container \"0a61746baf742d3cb2f2a231075bf4899aea79e30569af2a6c03217abbcb8650\": not found"
Apr 17 23:46:33.807524 kubelet[3199]: I0417 23:46:33.807473 3199 scope.go:117] "RemoveContainer" containerID="7a97223adc8cad5b1a81449b6a647e94fd129e3ac2ebe88e418587fb9b9ca3e7"
Apr 17 23:46:33.807709 containerd[1988]: time="2026-04-17T23:46:33.807663611Z" level=error msg="ContainerStatus for \"7a97223adc8cad5b1a81449b6a647e94fd129e3ac2ebe88e418587fb9b9ca3e7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7a97223adc8cad5b1a81449b6a647e94fd129e3ac2ebe88e418587fb9b9ca3e7\": not found"
Apr 17 23:46:33.807810 kubelet[3199]: E0417 23:46:33.807784 3199 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7a97223adc8cad5b1a81449b6a647e94fd129e3ac2ebe88e418587fb9b9ca3e7\": not found" containerID="7a97223adc8cad5b1a81449b6a647e94fd129e3ac2ebe88e418587fb9b9ca3e7"
Apr 17 23:46:33.807877 kubelet[3199]: I0417 23:46:33.807814 3199 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7a97223adc8cad5b1a81449b6a647e94fd129e3ac2ebe88e418587fb9b9ca3e7"} err="failed to get container status \"7a97223adc8cad5b1a81449b6a647e94fd129e3ac2ebe88e418587fb9b9ca3e7\": rpc error: code = NotFound desc = an error occurred when try to find container \"7a97223adc8cad5b1a81449b6a647e94fd129e3ac2ebe88e418587fb9b9ca3e7\": not found"
Apr 17 23:46:33.807877 kubelet[3199]: I0417 23:46:33.807834 3199 scope.go:117] "RemoveContainer" containerID="23e5d2dd2ebca6f4a01ee73b70f86c3c1ad91db47e6c17a492826f4a4774b697"
Apr 17 23:46:33.809104 containerd[1988]: time="2026-04-17T23:46:33.809075820Z" level=info msg="RemoveContainer for \"23e5d2dd2ebca6f4a01ee73b70f86c3c1ad91db47e6c17a492826f4a4774b697\""
Apr 17 23:46:33.841852 containerd[1988]: time="2026-04-17T23:46:33.841727487Z" level=info msg="RemoveContainer for \"23e5d2dd2ebca6f4a01ee73b70f86c3c1ad91db47e6c17a492826f4a4774b697\" returns successfully"
Apr 17 23:46:33.842127 kubelet[3199]: I0417 23:46:33.842081 3199 scope.go:117] "RemoveContainer" containerID="23e5d2dd2ebca6f4a01ee73b70f86c3c1ad91db47e6c17a492826f4a4774b697"
Apr 17 23:46:33.842754 containerd[1988]: time="2026-04-17T23:46:33.842674850Z" level=error msg="ContainerStatus for \"23e5d2dd2ebca6f4a01ee73b70f86c3c1ad91db47e6c17a492826f4a4774b697\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"23e5d2dd2ebca6f4a01ee73b70f86c3c1ad91db47e6c17a492826f4a4774b697\": not found"
Apr 17 23:46:33.842936 kubelet[3199]: E0417 23:46:33.842906 3199 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"23e5d2dd2ebca6f4a01ee73b70f86c3c1ad91db47e6c17a492826f4a4774b697\": not found" containerID="23e5d2dd2ebca6f4a01ee73b70f86c3c1ad91db47e6c17a492826f4a4774b697"
Apr 17 23:46:33.843030 kubelet[3199]: I0417 23:46:33.842953 3199 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"23e5d2dd2ebca6f4a01ee73b70f86c3c1ad91db47e6c17a492826f4a4774b697"} err="failed to get container status \"23e5d2dd2ebca6f4a01ee73b70f86c3c1ad91db47e6c17a492826f4a4774b697\": rpc error: code = NotFound desc = an error occurred when try to find container \"23e5d2dd2ebca6f4a01ee73b70f86c3c1ad91db47e6c17a492826f4a4774b697\": not found"
Apr 17 23:46:34.016690 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b081fa886b212ef8bb0f646ef285e5c07395b873c5d79e66b073da35799517d-rootfs.mount: Deactivated successfully.
Apr 17 23:46:34.016831 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d5f6a9025acd3d10bd7d1b501c8598067375f52ac8dcfcb29e4b726ee865ac0-rootfs.mount: Deactivated successfully.
Apr 17 23:46:34.016920 systemd[1]: var-lib-kubelet-pods-410ee967\x2d0222\x2d4877\x2db051\x2da9f67fe1a8e0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsqwfw.mount: Deactivated successfully.
Apr 17 23:46:34.017012 systemd[1]: var-lib-kubelet-pods-082682f2\x2dc210\x2d4d89\x2d813d\x2d1d82ed4c11a0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4gq9r.mount: Deactivated successfully.
Apr 17 23:46:34.017094 systemd[1]: var-lib-kubelet-pods-082682f2\x2dc210\x2d4d89\x2d813d\x2d1d82ed4c11a0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Apr 17 23:46:34.017156 systemd[1]: var-lib-kubelet-pods-082682f2\x2dc210\x2d4d89\x2d813d\x2d1d82ed4c11a0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Apr 17 23:46:34.969994 sshd[4839]: pam_unix(sshd:session): session closed for user core
Apr 17 23:46:34.975019 systemd-logind[1967]: Session 20 logged out. Waiting for processes to exit.
Apr 17 23:46:34.976216 systemd[1]: sshd@19-172.31.19.162:22-20.229.252.112:52956.service: Deactivated successfully.
Apr 17 23:46:34.979247 systemd[1]: session-20.scope: Deactivated successfully.
Apr 17 23:46:34.980801 systemd-logind[1967]: Removed session 20.
Apr 17 23:46:35.145054 systemd[1]: Started sshd@20-172.31.19.162:22-20.229.252.112:51412.service - OpenSSH per-connection server daemon (20.229.252.112:51412).
Apr 17 23:46:35.389802 kubelet[3199]: I0417 23:46:35.389760 3199 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="082682f2-c210-4d89-813d-1d82ed4c11a0" path="/var/lib/kubelet/pods/082682f2-c210-4d89-813d-1d82ed4c11a0/volumes"
Apr 17 23:46:35.390573 kubelet[3199]: I0417 23:46:35.390535 3199 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="410ee967-0222-4877-b051-a9f67fe1a8e0" path="/var/lib/kubelet/pods/410ee967-0222-4877-b051-a9f67fe1a8e0/volumes"
Apr 17 23:46:35.968733 ntpd[1953]: Deleting interface #11 lxc_health, fe80::3816:36ff:feae:d399%8#123, interface stats: received=0, sent=0, dropped=0, active_time=69 secs
Apr 17 23:46:35.969575 ntpd[1953]: 17 Apr 23:46:35 ntpd[1953]: Deleting interface #11 lxc_health, fe80::3816:36ff:feae:d399%8#123, interface stats: received=0, sent=0, dropped=0, active_time=69 secs
Apr 17 23:46:36.136563 sshd[5007]: Accepted publickey for core from 20.229.252.112 port 51412 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:46:36.137304 sshd[5007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:46:36.143592 systemd-logind[1967]: New session 21 of user core.
Apr 17 23:46:36.146899 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 17 23:46:37.258833 systemd[1]: Created slice kubepods-burstable-podc6037225_c99e_4964_ba5d_9c8ffa1a264d.slice - libcontainer container kubepods-burstable-podc6037225_c99e_4964_ba5d_9c8ffa1a264d.slice.
Apr 17 23:46:37.332435 sshd[5007]: pam_unix(sshd:session): session closed for user core
Apr 17 23:46:37.337895 systemd[1]: sshd@20-172.31.19.162:22-20.229.252.112:51412.service: Deactivated successfully.
Apr 17 23:46:37.340615 systemd[1]: session-21.scope: Deactivated successfully.
Apr 17 23:46:37.341608 systemd-logind[1967]: Session 21 logged out. Waiting for processes to exit.
Apr 17 23:46:37.343110 systemd-logind[1967]: Removed session 21.
Apr 17 23:46:37.354337 kubelet[3199]: I0417 23:46:37.354288 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6037225-c99e-4964-ba5d-9c8ffa1a264d-cilium-config-path\") pod \"cilium-g9wtq\" (UID: \"c6037225-c99e-4964-ba5d-9c8ffa1a264d\") " pod="kube-system/cilium-g9wtq"
Apr 17 23:46:37.354337 kubelet[3199]: I0417 23:46:37.354342 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c6037225-c99e-4964-ba5d-9c8ffa1a264d-bpf-maps\") pod \"cilium-g9wtq\" (UID: \"c6037225-c99e-4964-ba5d-9c8ffa1a264d\") " pod="kube-system/cilium-g9wtq"
Apr 17 23:46:37.354879 kubelet[3199]: I0417 23:46:37.354372 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c6037225-c99e-4964-ba5d-9c8ffa1a264d-hubble-tls\") pod \"cilium-g9wtq\" (UID: \"c6037225-c99e-4964-ba5d-9c8ffa1a264d\") " pod="kube-system/cilium-g9wtq"
Apr 17 23:46:37.354879 kubelet[3199]: I0417 23:46:37.354395 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c6037225-c99e-4964-ba5d-9c8ffa1a264d-cni-path\") pod \"cilium-g9wtq\" (UID: \"c6037225-c99e-4964-ba5d-9c8ffa1a264d\") " pod="kube-system/cilium-g9wtq"
Apr 17 23:46:37.354879 kubelet[3199]: I0417 23:46:37.354423 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6037225-c99e-4964-ba5d-9c8ffa1a264d-xtables-lock\") pod \"cilium-g9wtq\" (UID: \"c6037225-c99e-4964-ba5d-9c8ffa1a264d\") " pod="kube-system/cilium-g9wtq"
Apr 17 23:46:37.354879 kubelet[3199]: I0417 23:46:37.354444 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c6037225-c99e-4964-ba5d-9c8ffa1a264d-cilium-ipsec-secrets\") pod \"cilium-g9wtq\" (UID: \"c6037225-c99e-4964-ba5d-9c8ffa1a264d\") " pod="kube-system/cilium-g9wtq"
Apr 17 23:46:37.354879 kubelet[3199]: I0417 23:46:37.354470 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c6037225-c99e-4964-ba5d-9c8ffa1a264d-hostproc\") pod \"cilium-g9wtq\" (UID: \"c6037225-c99e-4964-ba5d-9c8ffa1a264d\") " pod="kube-system/cilium-g9wtq"
Apr 17 23:46:37.354879 kubelet[3199]: I0417 23:46:37.354496 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c6037225-c99e-4964-ba5d-9c8ffa1a264d-clustermesh-secrets\") pod \"cilium-g9wtq\" (UID: \"c6037225-c99e-4964-ba5d-9c8ffa1a264d\") " pod="kube-system/cilium-g9wtq"
Apr 17 23:46:37.355088 kubelet[3199]: I0417 23:46:37.354539 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c6037225-c99e-4964-ba5d-9c8ffa1a264d-host-proc-sys-kernel\") pod \"cilium-g9wtq\" (UID: \"c6037225-c99e-4964-ba5d-9c8ffa1a264d\") " pod="kube-system/cilium-g9wtq"
Apr 17 23:46:37.355088 kubelet[3199]: I0417 23:46:37.354561 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c6037225-c99e-4964-ba5d-9c8ffa1a264d-cilium-run\") pod \"cilium-g9wtq\" (UID: \"c6037225-c99e-4964-ba5d-9c8ffa1a264d\") " pod="kube-system/cilium-g9wtq"
Apr 17 23:46:37.355088 kubelet[3199]: I0417 23:46:37.354586 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c6037225-c99e-4964-ba5d-9c8ffa1a264d-cilium-cgroup\") pod \"cilium-g9wtq\" (UID: \"c6037225-c99e-4964-ba5d-9c8ffa1a264d\") " pod="kube-system/cilium-g9wtq"
Apr 17 23:46:37.355088 kubelet[3199]: I0417 23:46:37.354613 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w772t\" (UniqueName: \"kubernetes.io/projected/c6037225-c99e-4964-ba5d-9c8ffa1a264d-kube-api-access-w772t\") pod \"cilium-g9wtq\" (UID: \"c6037225-c99e-4964-ba5d-9c8ffa1a264d\") " pod="kube-system/cilium-g9wtq"
Apr 17 23:46:37.355088 kubelet[3199]: I0417 23:46:37.354641 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6037225-c99e-4964-ba5d-9c8ffa1a264d-lib-modules\") pod \"cilium-g9wtq\" (UID: \"c6037225-c99e-4964-ba5d-9c8ffa1a264d\") " pod="kube-system/cilium-g9wtq"
Apr 17 23:46:37.355230 kubelet[3199]: I0417 23:46:37.354661 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c6037225-c99e-4964-ba5d-9c8ffa1a264d-host-proc-sys-net\") pod \"cilium-g9wtq\" (UID: \"c6037225-c99e-4964-ba5d-9c8ffa1a264d\") " pod="kube-system/cilium-g9wtq"
Apr 17 23:46:37.355230 kubelet[3199]: I0417 23:46:37.354687 3199 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6037225-c99e-4964-ba5d-9c8ffa1a264d-etc-cni-netd\") pod \"cilium-g9wtq\" (UID: \"c6037225-c99e-4964-ba5d-9c8ffa1a264d\") " pod="kube-system/cilium-g9wtq"
Apr 17 23:46:37.506848 kubelet[3199]: E0417 23:46:37.506805 3199 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 17 23:46:37.519078 systemd[1]: Started sshd@21-172.31.19.162:22-20.229.252.112:51414.service - OpenSSH per-connection server daemon (20.229.252.112:51414).
Apr 17 23:46:37.574545 containerd[1988]: time="2026-04-17T23:46:37.574494149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g9wtq,Uid:c6037225-c99e-4964-ba5d-9c8ffa1a264d,Namespace:kube-system,Attempt:0,}"
Apr 17 23:46:37.615431 containerd[1988]: time="2026-04-17T23:46:37.615119229Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:46:37.615431 containerd[1988]: time="2026-04-17T23:46:37.615172308Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:46:37.615431 containerd[1988]: time="2026-04-17T23:46:37.615197592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:46:37.615431 containerd[1988]: time="2026-04-17T23:46:37.615308544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:46:37.647953 systemd[1]: Started cri-containerd-33b0ef4410b12d07b2dcb0e6391e2e364d9d086dfbd379ee0654f2d9412443a7.scope - libcontainer container 33b0ef4410b12d07b2dcb0e6391e2e364d9d086dfbd379ee0654f2d9412443a7.
Apr 17 23:46:37.679073 containerd[1988]: time="2026-04-17T23:46:37.679013024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g9wtq,Uid:c6037225-c99e-4964-ba5d-9c8ffa1a264d,Namespace:kube-system,Attempt:0,} returns sandbox id \"33b0ef4410b12d07b2dcb0e6391e2e364d9d086dfbd379ee0654f2d9412443a7\""
Apr 17 23:46:37.696750 containerd[1988]: time="2026-04-17T23:46:37.696562775Z" level=info msg="CreateContainer within sandbox \"33b0ef4410b12d07b2dcb0e6391e2e364d9d086dfbd379ee0654f2d9412443a7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 17 23:46:37.726301 containerd[1988]: time="2026-04-17T23:46:37.726193089Z" level=info msg="CreateContainer within sandbox \"33b0ef4410b12d07b2dcb0e6391e2e364d9d086dfbd379ee0654f2d9412443a7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"398266893130eb5a42505cd8573e5661242476f21d168ac397f2d3ba9ab87400\""
Apr 17 23:46:37.727175 containerd[1988]: time="2026-04-17T23:46:37.727131910Z" level=info msg="StartContainer for \"398266893130eb5a42505cd8573e5661242476f21d168ac397f2d3ba9ab87400\""
Apr 17 23:46:37.754923 systemd[1]: Started cri-containerd-398266893130eb5a42505cd8573e5661242476f21d168ac397f2d3ba9ab87400.scope - libcontainer container 398266893130eb5a42505cd8573e5661242476f21d168ac397f2d3ba9ab87400.
Apr 17 23:46:37.786498 containerd[1988]: time="2026-04-17T23:46:37.786371792Z" level=info msg="StartContainer for \"398266893130eb5a42505cd8573e5661242476f21d168ac397f2d3ba9ab87400\" returns successfully"
Apr 17 23:46:37.806798 systemd[1]: cri-containerd-398266893130eb5a42505cd8573e5661242476f21d168ac397f2d3ba9ab87400.scope: Deactivated successfully.
Apr 17 23:46:37.855328 containerd[1988]: time="2026-04-17T23:46:37.855258878Z" level=info msg="shim disconnected" id=398266893130eb5a42505cd8573e5661242476f21d168ac397f2d3ba9ab87400 namespace=k8s.io
Apr 17 23:46:37.855328 containerd[1988]: time="2026-04-17T23:46:37.855320504Z" level=warning msg="cleaning up after shim disconnected" id=398266893130eb5a42505cd8573e5661242476f21d168ac397f2d3ba9ab87400 namespace=k8s.io
Apr 17 23:46:37.855328 containerd[1988]: time="2026-04-17T23:46:37.855331477Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:46:37.883769 containerd[1988]: time="2026-04-17T23:46:37.883570702Z" level=warning msg="cleanup warnings time=\"2026-04-17T23:46:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 17 23:46:38.540253 sshd[5023]: Accepted publickey for core from 20.229.252.112 port 51414 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:46:38.541866 sshd[5023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:46:38.547810 systemd-logind[1967]: New session 22 of user core.
Apr 17 23:46:38.554986 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 17 23:46:38.769813 containerd[1988]: time="2026-04-17T23:46:38.769672987Z" level=info msg="CreateContainer within sandbox \"33b0ef4410b12d07b2dcb0e6391e2e364d9d086dfbd379ee0654f2d9412443a7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 17 23:46:38.795625 containerd[1988]: time="2026-04-17T23:46:38.795512118Z" level=info msg="CreateContainer within sandbox \"33b0ef4410b12d07b2dcb0e6391e2e364d9d086dfbd379ee0654f2d9412443a7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ccf2d1b8dc324d7443c5c8162550791b242efac2e454a0a19162568eff422036\""
Apr 17 23:46:38.796358 containerd[1988]: time="2026-04-17T23:46:38.796294411Z" level=info msg="StartContainer for \"ccf2d1b8dc324d7443c5c8162550791b242efac2e454a0a19162568eff422036\""
Apr 17 23:46:38.839967 systemd[1]: Started cri-containerd-ccf2d1b8dc324d7443c5c8162550791b242efac2e454a0a19162568eff422036.scope - libcontainer container ccf2d1b8dc324d7443c5c8162550791b242efac2e454a0a19162568eff422036.
Apr 17 23:46:38.873646 containerd[1988]: time="2026-04-17T23:46:38.871246226Z" level=info msg="StartContainer for \"ccf2d1b8dc324d7443c5c8162550791b242efac2e454a0a19162568eff422036\" returns successfully"
Apr 17 23:46:38.883225 systemd[1]: cri-containerd-ccf2d1b8dc324d7443c5c8162550791b242efac2e454a0a19162568eff422036.scope: Deactivated successfully.
Apr 17 23:46:38.921528 containerd[1988]: time="2026-04-17T23:46:38.921454146Z" level=info msg="shim disconnected" id=ccf2d1b8dc324d7443c5c8162550791b242efac2e454a0a19162568eff422036 namespace=k8s.io
Apr 17 23:46:38.921528 containerd[1988]: time="2026-04-17T23:46:38.921532635Z" level=warning msg="cleaning up after shim disconnected" id=ccf2d1b8dc324d7443c5c8162550791b242efac2e454a0a19162568eff422036 namespace=k8s.io
Apr 17 23:46:38.921528 containerd[1988]: time="2026-04-17T23:46:38.921545907Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:46:39.247068 sshd[5023]: pam_unix(sshd:session): session closed for user core
Apr 17 23:46:39.252318 systemd[1]: sshd@21-172.31.19.162:22-20.229.252.112:51414.service: Deactivated successfully.
Apr 17 23:46:39.254670 systemd[1]: session-22.scope: Deactivated successfully.
Apr 17 23:46:39.255917 systemd-logind[1967]: Session 22 logged out. Waiting for processes to exit.
Apr 17 23:46:39.257215 systemd-logind[1967]: Removed session 22.
Apr 17 23:46:39.429121 systemd[1]: Started sshd@22-172.31.19.162:22-20.229.252.112:51416.service - OpenSSH per-connection server daemon (20.229.252.112:51416).
Apr 17 23:46:39.462810 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ccf2d1b8dc324d7443c5c8162550791b242efac2e454a0a19162568eff422036-rootfs.mount: Deactivated successfully.
Apr 17 23:46:39.548814 kubelet[3199]: I0417 23:46:39.548228 3199 setters.go:543] "Node became not ready" node="ip-172-31-19-162" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-17T23:46:39Z","lastTransitionTime":"2026-04-17T23:46:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Apr 17 23:46:39.772552 containerd[1988]: time="2026-04-17T23:46:39.772290639Z" level=info msg="CreateContainer within sandbox \"33b0ef4410b12d07b2dcb0e6391e2e364d9d086dfbd379ee0654f2d9412443a7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 17 23:46:39.799676 containerd[1988]: time="2026-04-17T23:46:39.799558624Z" level=info msg="CreateContainer within sandbox \"33b0ef4410b12d07b2dcb0e6391e2e364d9d086dfbd379ee0654f2d9412443a7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7a5f9e139d0d781d84d8cfb85483aadb792dc574407c9cbd5b59e2fa865e84c0\""
Apr 17 23:46:39.803167 containerd[1988]: time="2026-04-17T23:46:39.803126338Z" level=info msg="StartContainer for \"7a5f9e139d0d781d84d8cfb85483aadb792dc574407c9cbd5b59e2fa865e84c0\""
Apr 17 23:46:39.856574 systemd[1]: run-containerd-runc-k8s.io-7a5f9e139d0d781d84d8cfb85483aadb792dc574407c9cbd5b59e2fa865e84c0-runc.ZaMBi4.mount: Deactivated successfully.
Apr 17 23:46:39.866971 systemd[1]: Started cri-containerd-7a5f9e139d0d781d84d8cfb85483aadb792dc574407c9cbd5b59e2fa865e84c0.scope - libcontainer container 7a5f9e139d0d781d84d8cfb85483aadb792dc574407c9cbd5b59e2fa865e84c0.
Apr 17 23:46:39.901068 containerd[1988]: time="2026-04-17T23:46:39.901015730Z" level=info msg="StartContainer for \"7a5f9e139d0d781d84d8cfb85483aadb792dc574407c9cbd5b59e2fa865e84c0\" returns successfully"
Apr 17 23:46:39.909630 systemd[1]: cri-containerd-7a5f9e139d0d781d84d8cfb85483aadb792dc574407c9cbd5b59e2fa865e84c0.scope: Deactivated successfully.
Apr 17 23:46:39.950665 containerd[1988]: time="2026-04-17T23:46:39.950586305Z" level=info msg="shim disconnected" id=7a5f9e139d0d781d84d8cfb85483aadb792dc574407c9cbd5b59e2fa865e84c0 namespace=k8s.io
Apr 17 23:46:39.950665 containerd[1988]: time="2026-04-17T23:46:39.950659345Z" level=warning msg="cleaning up after shim disconnected" id=7a5f9e139d0d781d84d8cfb85483aadb792dc574407c9cbd5b59e2fa865e84c0 namespace=k8s.io
Apr 17 23:46:39.950665 containerd[1988]: time="2026-04-17T23:46:39.950671782Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:46:39.966418 containerd[1988]: time="2026-04-17T23:46:39.966345841Z" level=warning msg="cleanup warnings time=\"2026-04-17T23:46:39Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 17 23:46:40.455489 sshd[5194]: Accepted publickey for core from 20.229.252.112 port 51416 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:46:40.457240 sshd[5194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:46:40.462689 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a5f9e139d0d781d84d8cfb85483aadb792dc574407c9cbd5b59e2fa865e84c0-rootfs.mount: Deactivated successfully.
Apr 17 23:46:40.467004 systemd-logind[1967]: New session 23 of user core.
Apr 17 23:46:40.473944 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 17 23:46:40.777911 containerd[1988]: time="2026-04-17T23:46:40.777858248Z" level=info msg="CreateContainer within sandbox \"33b0ef4410b12d07b2dcb0e6391e2e364d9d086dfbd379ee0654f2d9412443a7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 17 23:46:40.801326 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4015774760.mount: Deactivated successfully.
Apr 17 23:46:40.803125 containerd[1988]: time="2026-04-17T23:46:40.803075310Z" level=info msg="CreateContainer within sandbox \"33b0ef4410b12d07b2dcb0e6391e2e364d9d086dfbd379ee0654f2d9412443a7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"06289851891bf5255473d7db5e62f63eab5af31e5c7307b663dbfaf589fe3673\""
Apr 17 23:46:40.806774 containerd[1988]: time="2026-04-17T23:46:40.806726428Z" level=info msg="StartContainer for \"06289851891bf5255473d7db5e62f63eab5af31e5c7307b663dbfaf589fe3673\""
Apr 17 23:46:40.849980 systemd[1]: Started cri-containerd-06289851891bf5255473d7db5e62f63eab5af31e5c7307b663dbfaf589fe3673.scope - libcontainer container 06289851891bf5255473d7db5e62f63eab5af31e5c7307b663dbfaf589fe3673.
Apr 17 23:46:40.879259 systemd[1]: cri-containerd-06289851891bf5255473d7db5e62f63eab5af31e5c7307b663dbfaf589fe3673.scope: Deactivated successfully.
Apr 17 23:46:40.886258 containerd[1988]: time="2026-04-17T23:46:40.884181423Z" level=info msg="StartContainer for \"06289851891bf5255473d7db5e62f63eab5af31e5c7307b663dbfaf589fe3673\" returns successfully"
Apr 17 23:46:40.916522 containerd[1988]: time="2026-04-17T23:46:40.916434302Z" level=info msg="shim disconnected" id=06289851891bf5255473d7db5e62f63eab5af31e5c7307b663dbfaf589fe3673 namespace=k8s.io
Apr 17 23:46:40.916522 containerd[1988]: time="2026-04-17T23:46:40.916510072Z" level=warning msg="cleaning up after shim disconnected" id=06289851891bf5255473d7db5e62f63eab5af31e5c7307b663dbfaf589fe3673 namespace=k8s.io
Apr 17 23:46:40.916522 containerd[1988]: time="2026-04-17T23:46:40.916524102Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:46:41.462650 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06289851891bf5255473d7db5e62f63eab5af31e5c7307b663dbfaf589fe3673-rootfs.mount: Deactivated successfully.
Apr 17 23:46:41.783276 containerd[1988]: time="2026-04-17T23:46:41.783233295Z" level=info msg="CreateContainer within sandbox \"33b0ef4410b12d07b2dcb0e6391e2e364d9d086dfbd379ee0654f2d9412443a7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 17 23:46:41.811938 containerd[1988]: time="2026-04-17T23:46:41.811884096Z" level=info msg="CreateContainer within sandbox \"33b0ef4410b12d07b2dcb0e6391e2e364d9d086dfbd379ee0654f2d9412443a7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"35c43d0b9e4c6999af3149edc43cf1856f94a3cc94d7c27ad90063f0cf9300b9\""
Apr 17 23:46:41.812968 containerd[1988]: time="2026-04-17T23:46:41.812611064Z" level=info msg="StartContainer for \"35c43d0b9e4c6999af3149edc43cf1856f94a3cc94d7c27ad90063f0cf9300b9\""
Apr 17 23:46:41.850968 systemd[1]: Started cri-containerd-35c43d0b9e4c6999af3149edc43cf1856f94a3cc94d7c27ad90063f0cf9300b9.scope - libcontainer container 35c43d0b9e4c6999af3149edc43cf1856f94a3cc94d7c27ad90063f0cf9300b9.
Apr 17 23:46:41.886041 containerd[1988]: time="2026-04-17T23:46:41.885995029Z" level=info msg="StartContainer for \"35c43d0b9e4c6999af3149edc43cf1856f94a3cc94d7c27ad90063f0cf9300b9\" returns successfully"
Apr 17 23:46:42.393331 kubelet[3199]: E0417 23:46:42.392785 3199 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-66bc5c9577-nw49s" podUID="be238acd-d209-4dcf-a1a9-31aca086f6bc"
Apr 17 23:46:42.558831 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Apr 17 23:46:45.702630 systemd-networkd[1821]: lxc_health: Link UP
Apr 17 23:46:45.710191 systemd-networkd[1821]: lxc_health: Gained carrier
Apr 17 23:46:45.729633 (udev-worker)[5887]: Network interface NamePolicy= disabled on kernel command line.
Apr 17 23:46:47.070028 systemd-networkd[1821]: lxc_health: Gained IPv6LL
Apr 17 23:46:47.597067 kubelet[3199]: I0417 23:46:47.596992 3199 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-g9wtq" podStartSLOduration=10.596970096 podStartE2EDuration="10.596970096s" podCreationTimestamp="2026-04-17 23:46:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:46:42.813769909 +0000 UTC m=+105.593914565" watchObservedRunningTime="2026-04-17 23:46:47.596970096 +0000 UTC m=+110.377114751"
Apr 17 23:46:48.034568 systemd[1]: run-containerd-runc-k8s.io-35c43d0b9e4c6999af3149edc43cf1856f94a3cc94d7c27ad90063f0cf9300b9-runc.TjZ1Gs.mount: Deactivated successfully.
Apr 17 23:46:49.968849 ntpd[1953]: Listen normally on 14 lxc_health [fe80::f8c6:91ff:fe53:9c8e%14]:123
Apr 17 23:46:49.969395 ntpd[1953]: 17 Apr 23:46:49 ntpd[1953]: Listen normally on 14 lxc_health [fe80::f8c6:91ff:fe53:9c8e%14]:123
Apr 17 23:46:52.612736 sshd[5194]: pam_unix(sshd:session): session closed for user core
Apr 17 23:46:52.616992 systemd[1]: sshd@22-172.31.19.162:22-20.229.252.112:51416.service: Deactivated successfully.
Apr 17 23:46:52.619432 systemd[1]: session-23.scope: Deactivated successfully.
Apr 17 23:46:52.620537 systemd-logind[1967]: Session 23 logged out. Waiting for processes to exit.
Apr 17 23:46:52.621934 systemd-logind[1967]: Removed session 23.
Apr 17 23:46:57.384006 containerd[1988]: time="2026-04-17T23:46:57.383950509Z" level=info msg="StopPodSandbox for \"7d5f6a9025acd3d10bd7d1b501c8598067375f52ac8dcfcb29e4b726ee865ac0\""
Apr 17 23:46:57.384772 containerd[1988]: time="2026-04-17T23:46:57.384073119Z" level=info msg="TearDown network for sandbox \"7d5f6a9025acd3d10bd7d1b501c8598067375f52ac8dcfcb29e4b726ee865ac0\" successfully"
Apr 17 23:46:57.384772 containerd[1988]: time="2026-04-17T23:46:57.384091419Z" level=info msg="StopPodSandbox for \"7d5f6a9025acd3d10bd7d1b501c8598067375f52ac8dcfcb29e4b726ee865ac0\" returns successfully"
Apr 17 23:46:57.391186 containerd[1988]: time="2026-04-17T23:46:57.391128551Z" level=info msg="RemovePodSandbox for \"7d5f6a9025acd3d10bd7d1b501c8598067375f52ac8dcfcb29e4b726ee865ac0\""
Apr 17 23:46:57.394563 containerd[1988]: time="2026-04-17T23:46:57.394512421Z" level=info msg="Forcibly stopping sandbox \"7d5f6a9025acd3d10bd7d1b501c8598067375f52ac8dcfcb29e4b726ee865ac0\""
Apr 17 23:46:57.394754 containerd[1988]: time="2026-04-17T23:46:57.394620275Z" level=info msg="TearDown network for sandbox \"7d5f6a9025acd3d10bd7d1b501c8598067375f52ac8dcfcb29e4b726ee865ac0\" successfully"
Apr 17 23:46:57.400312 containerd[1988]: time="2026-04-17T23:46:57.400082306Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7d5f6a9025acd3d10bd7d1b501c8598067375f52ac8dcfcb29e4b726ee865ac0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 17 23:46:57.400469 containerd[1988]: time="2026-04-17T23:46:57.400338716Z" level=info msg="RemovePodSandbox \"7d5f6a9025acd3d10bd7d1b501c8598067375f52ac8dcfcb29e4b726ee865ac0\" returns successfully"
Apr 17 23:46:57.401146 containerd[1988]: time="2026-04-17T23:46:57.401109405Z" level=info msg="StopPodSandbox for \"2b081fa886b212ef8bb0f646ef285e5c07395b873c5d79e66b073da35799517d\""
Apr 17 23:46:57.401268 containerd[1988]: time="2026-04-17T23:46:57.401215477Z" level=info msg="TearDown network for sandbox \"2b081fa886b212ef8bb0f646ef285e5c07395b873c5d79e66b073da35799517d\" successfully"
Apr 17 23:46:57.401268 containerd[1988]: time="2026-04-17T23:46:57.401234849Z" level=info msg="StopPodSandbox for \"2b081fa886b212ef8bb0f646ef285e5c07395b873c5d79e66b073da35799517d\" returns successfully"
Apr 17 23:46:57.401594 containerd[1988]: time="2026-04-17T23:46:57.401564182Z" level=info msg="RemovePodSandbox for \"2b081fa886b212ef8bb0f646ef285e5c07395b873c5d79e66b073da35799517d\""
Apr 17 23:46:57.401680 containerd[1988]: time="2026-04-17T23:46:57.401594532Z" level=info msg="Forcibly stopping sandbox \"2b081fa886b212ef8bb0f646ef285e5c07395b873c5d79e66b073da35799517d\""
Apr 17 23:46:57.401680 containerd[1988]: time="2026-04-17T23:46:57.401657612Z" level=info msg="TearDown network for sandbox \"2b081fa886b212ef8bb0f646ef285e5c07395b873c5d79e66b073da35799517d\" successfully"
Apr 17 23:46:57.406924 containerd[1988]: time="2026-04-17T23:46:57.406858611Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2b081fa886b212ef8bb0f646ef285e5c07395b873c5d79e66b073da35799517d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 17 23:46:57.407101 containerd[1988]: time="2026-04-17T23:46:57.406967460Z" level=info msg="RemovePodSandbox \"2b081fa886b212ef8bb0f646ef285e5c07395b873c5d79e66b073da35799517d\" returns successfully"
Apr 17 23:47:07.140076 systemd[1]: cri-containerd-03132a6e85d1ef5844df2612c0787ae9f793280310293b481378ae4557e1756c.scope: Deactivated successfully.
Apr 17 23:47:07.140575 systemd[1]: cri-containerd-03132a6e85d1ef5844df2612c0787ae9f793280310293b481378ae4557e1756c.scope: Consumed 2.583s CPU time, 15.8M memory peak, 0B memory swap peak.
Apr 17 23:47:07.169008 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03132a6e85d1ef5844df2612c0787ae9f793280310293b481378ae4557e1756c-rootfs.mount: Deactivated successfully.
Apr 17 23:47:07.192594 containerd[1988]: time="2026-04-17T23:47:07.192517461Z" level=info msg="shim disconnected" id=03132a6e85d1ef5844df2612c0787ae9f793280310293b481378ae4557e1756c namespace=k8s.io
Apr 17 23:47:07.192594 containerd[1988]: time="2026-04-17T23:47:07.192587577Z" level=warning msg="cleaning up after shim disconnected" id=03132a6e85d1ef5844df2612c0787ae9f793280310293b481378ae4557e1756c namespace=k8s.io
Apr 17 23:47:07.192594 containerd[1988]: time="2026-04-17T23:47:07.192599475Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:47:07.850286 kubelet[3199]: I0417 23:47:07.850238 3199 scope.go:117] "RemoveContainer" containerID="03132a6e85d1ef5844df2612c0787ae9f793280310293b481378ae4557e1756c"
Apr 17 23:47:07.852963 containerd[1988]: time="2026-04-17T23:47:07.852910365Z" level=info msg="CreateContainer within sandbox \"3da28228328cda6cdbdb0dea7699e5e36ac24aaf1834d1c277bd55eb1e99eac8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 17 23:47:07.877385 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1703021223.mount: Deactivated successfully.
Apr 17 23:47:07.883581 containerd[1988]: time="2026-04-17T23:47:07.883518423Z" level=info msg="CreateContainer within sandbox \"3da28228328cda6cdbdb0dea7699e5e36ac24aaf1834d1c277bd55eb1e99eac8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"5bc9a510fd894a6ff35eb25c10ceb6dc892d84c7fd944ae3968ce08aa28872ec\""
Apr 17 23:47:07.884674 containerd[1988]: time="2026-04-17T23:47:07.884630242Z" level=info msg="StartContainer for \"5bc9a510fd894a6ff35eb25c10ceb6dc892d84c7fd944ae3968ce08aa28872ec\""
Apr 17 23:47:07.931002 systemd[1]: Started cri-containerd-5bc9a510fd894a6ff35eb25c10ceb6dc892d84c7fd944ae3968ce08aa28872ec.scope - libcontainer container 5bc9a510fd894a6ff35eb25c10ceb6dc892d84c7fd944ae3968ce08aa28872ec.
Apr 17 23:47:07.989761 containerd[1988]: time="2026-04-17T23:47:07.988300284Z" level=info msg="StartContainer for \"5bc9a510fd894a6ff35eb25c10ceb6dc892d84c7fd944ae3968ce08aa28872ec\" returns successfully"
Apr 17 23:47:10.376490 kubelet[3199]: E0417 23:47:10.376418 3199 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-162?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Apr 17 23:47:11.767175 systemd[1]: cri-containerd-be5f5d573d7399b3013ba1c631a5359444bb85febf70969bb2c99fe08b5fe8ed.scope: Deactivated successfully.
Apr 17 23:47:11.768989 systemd[1]: cri-containerd-be5f5d573d7399b3013ba1c631a5359444bb85febf70969bb2c99fe08b5fe8ed.scope: Consumed 1.910s CPU time, 13.6M memory peak, 0B memory swap peak.
Apr 17 23:47:11.796490 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be5f5d573d7399b3013ba1c631a5359444bb85febf70969bb2c99fe08b5fe8ed-rootfs.mount: Deactivated successfully.
Apr 17 23:47:11.820766 containerd[1988]: time="2026-04-17T23:47:11.820660142Z" level=info msg="shim disconnected" id=be5f5d573d7399b3013ba1c631a5359444bb85febf70969bb2c99fe08b5fe8ed namespace=k8s.io
Apr 17 23:47:11.820766 containerd[1988]: time="2026-04-17T23:47:11.820750812Z" level=warning msg="cleaning up after shim disconnected" id=be5f5d573d7399b3013ba1c631a5359444bb85febf70969bb2c99fe08b5fe8ed namespace=k8s.io
Apr 17 23:47:11.820766 containerd[1988]: time="2026-04-17T23:47:11.820764321Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:47:11.865314 kubelet[3199]: I0417 23:47:11.865273 3199 scope.go:117] "RemoveContainer" containerID="be5f5d573d7399b3013ba1c631a5359444bb85febf70969bb2c99fe08b5fe8ed"
Apr 17 23:47:11.867475 containerd[1988]: time="2026-04-17T23:47:11.867415749Z" level=info msg="CreateContainer within sandbox \"a610b851b32415d266437d9b2159dc42c9bf7e53ed4d976ddff20956059ceed9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 17 23:47:11.892959 containerd[1988]: time="2026-04-17T23:47:11.892907841Z" level=info msg="CreateContainer within sandbox \"a610b851b32415d266437d9b2159dc42c9bf7e53ed4d976ddff20956059ceed9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"f7fccbb60dacb2786a76350a2f2adb667323e65fecfa94df4e039ff69c15e922\""
Apr 17 23:47:11.893643 containerd[1988]: time="2026-04-17T23:47:11.893604561Z" level=info msg="StartContainer for \"f7fccbb60dacb2786a76350a2f2adb667323e65fecfa94df4e039ff69c15e922\""
Apr 17 23:47:11.929904 systemd[1]: Started cri-containerd-f7fccbb60dacb2786a76350a2f2adb667323e65fecfa94df4e039ff69c15e922.scope - libcontainer container f7fccbb60dacb2786a76350a2f2adb667323e65fecfa94df4e039ff69c15e922.
Apr 17 23:47:11.977876 containerd[1988]: time="2026-04-17T23:47:11.977821587Z" level=info msg="StartContainer for \"f7fccbb60dacb2786a76350a2f2adb667323e65fecfa94df4e039ff69c15e922\" returns successfully"