Apr 21 10:17:06.965955 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 21 08:36:33 -00 2026
Apr 21 10:17:06.965993 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:17:06.966012 kernel: BIOS-provided physical RAM map:
Apr 21 10:17:06.966022 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 21 10:17:06.966032 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Apr 21 10:17:06.966043 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Apr 21 10:17:06.966056 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Apr 21 10:17:06.966131 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Apr 21 10:17:06.966144 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Apr 21 10:17:06.966160 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Apr 21 10:17:06.966172 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Apr 21 10:17:06.966184 kernel: NX (Execute Disable) protection: active
Apr 21 10:17:06.966195 kernel: APIC: Static calls initialized
Apr 21 10:17:06.966208 kernel: efi: EFI v2.7 by EDK II
Apr 21 10:17:06.966224 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x7701a018
Apr 21 10:17:06.966240 kernel: SMBIOS 2.7 present.
Apr 21 10:17:06.966253 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Apr 21 10:17:06.966267 kernel: Hypervisor detected: KVM
Apr 21 10:17:06.966280 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 21 10:17:06.966293 kernel: kvm-clock: using sched offset of 4385735130 cycles
Apr 21 10:17:06.966307 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 21 10:17:06.966322 kernel: tsc: Detected 2499.996 MHz processor
Apr 21 10:17:06.966336 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 21 10:17:06.966350 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 21 10:17:06.966364 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Apr 21 10:17:06.966381 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 21 10:17:06.966395 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 21 10:17:06.966408 kernel: Using GB pages for direct mapping
Apr 21 10:17:06.966421 kernel: Secure boot disabled
Apr 21 10:17:06.966435 kernel: ACPI: Early table checksum verification disabled
Apr 21 10:17:06.966448 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Apr 21 10:17:06.966462 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Apr 21 10:17:06.966475 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Apr 21 10:17:06.966489 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Apr 21 10:17:06.966506 kernel: ACPI: FACS 0x00000000789D0000 000040
Apr 21 10:17:06.966519 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Apr 21 10:17:06.966533 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Apr 21 10:17:06.966546 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Apr 21 10:17:06.966559 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Apr 21 10:17:06.966573 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Apr 21 10:17:06.966593 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Apr 21 10:17:06.966611 kernel: ACPI: SSDT 0x0000000078952000 0000D1 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Apr 21 10:17:06.966625 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Apr 21 10:17:06.966640 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Apr 21 10:17:06.966653 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Apr 21 10:17:06.966665 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Apr 21 10:17:06.966677 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Apr 21 10:17:06.966689 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Apr 21 10:17:06.966706 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Apr 21 10:17:06.966719 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Apr 21 10:17:06.966754 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Apr 21 10:17:06.966768 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Apr 21 10:17:06.966781 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x789520d0]
Apr 21 10:17:06.966794 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Apr 21 10:17:06.966808 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 21 10:17:06.966822 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 21 10:17:06.966836 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Apr 21 10:17:06.966853 kernel: NUMA: Initialized distance table, cnt=1
Apr 21 10:17:06.966866 kernel: NODE_DATA(0) allocated [mem 0x7a8f0000-0x7a8f5fff]
Apr 21 10:17:06.966879 kernel: Zone ranges:
Apr 21 10:17:06.966893 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 21 10:17:06.966906 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Apr 21 10:17:06.966920 kernel: Normal empty
Apr 21 10:17:06.966933 kernel: Movable zone start for each node
Apr 21 10:17:06.966946 kernel: Early memory node ranges
Apr 21 10:17:06.966959 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 21 10:17:06.966975 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Apr 21 10:17:06.966989 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Apr 21 10:17:06.967003 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Apr 21 10:17:06.967017 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 21 10:17:06.967030 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 21 10:17:06.967043 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Apr 21 10:17:06.967057 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Apr 21 10:17:06.967070 kernel: ACPI: PM-Timer IO Port: 0xb008
Apr 21 10:17:06.967084 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 21 10:17:06.967101 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Apr 21 10:17:06.967114 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 21 10:17:06.967128 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 21 10:17:06.967141 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 21 10:17:06.967156 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 21 10:17:06.967168 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 21 10:17:06.967182 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 21 10:17:06.967196 kernel: TSC deadline timer available
Apr 21 10:17:06.967210 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 21 10:17:06.967223 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 21 10:17:06.967241 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Apr 21 10:17:06.967254 kernel: Booting paravirtualized kernel on KVM
Apr 21 10:17:06.967268 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 21 10:17:06.967282 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 21 10:17:06.967297 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 21 10:17:06.967311 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 21 10:17:06.967324 kernel: pcpu-alloc: [0] 0 1
Apr 21 10:17:06.967337 kernel: kvm-guest: PV spinlocks enabled
Apr 21 10:17:06.967359 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 21 10:17:06.967378 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:17:06.967392 kernel: random: crng init done
Apr 21 10:17:06.967405 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 21 10:17:06.967419 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 21 10:17:06.967432 kernel: Fallback order for Node 0: 0
Apr 21 10:17:06.967445 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Apr 21 10:17:06.967458 kernel: Policy zone: DMA32
Apr 21 10:17:06.967471 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 21 10:17:06.967488 kernel: Memory: 1874644K/2037804K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 162900K reserved, 0K cma-reserved)
Apr 21 10:17:06.967502 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 21 10:17:06.967517 kernel: Kernel/User page tables isolation: enabled
Apr 21 10:17:06.967530 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 21 10:17:06.967544 kernel: ftrace: allocated 149 pages with 4 groups
Apr 21 10:17:06.967557 kernel: Dynamic Preempt: voluntary
Apr 21 10:17:06.967571 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 21 10:17:06.967585 kernel: rcu: RCU event tracing is enabled.
Apr 21 10:17:06.967599 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 21 10:17:06.967616 kernel: Trampoline variant of Tasks RCU enabled.
Apr 21 10:17:06.967630 kernel: Rude variant of Tasks RCU enabled.
Apr 21 10:17:06.967643 kernel: Tracing variant of Tasks RCU enabled.
Apr 21 10:17:06.967656 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 21 10:17:06.967670 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 21 10:17:06.967683 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 21 10:17:06.967696 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 21 10:17:06.967723 kernel: Console: colour dummy device 80x25
Apr 21 10:17:06.967748 kernel: printk: console [tty0] enabled
Apr 21 10:17:06.967762 kernel: printk: console [ttyS0] enabled
Apr 21 10:17:06.967776 kernel: ACPI: Core revision 20230628
Apr 21 10:17:06.967791 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Apr 21 10:17:06.967809 kernel: APIC: Switch to symmetric I/O mode setup
Apr 21 10:17:06.967824 kernel: x2apic enabled
Apr 21 10:17:06.967837 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 21 10:17:06.967853 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Apr 21 10:17:06.967867 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Apr 21 10:17:06.967884 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Apr 21 10:17:06.967898 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Apr 21 10:17:06.967912 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 21 10:17:06.967927 kernel: Spectre V2 : Mitigation: Retpolines
Apr 21 10:17:06.967940 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 21 10:17:06.967955 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 21 10:17:06.967969 kernel: RETBleed: Vulnerable
Apr 21 10:17:06.967983 kernel: Speculative Store Bypass: Vulnerable
Apr 21 10:17:06.967996 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 21 10:17:06.968011 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 21 10:17:06.968028 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 21 10:17:06.968041 kernel: active return thunk: its_return_thunk
Apr 21 10:17:06.968055 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 21 10:17:06.968069 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 21 10:17:06.968083 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 21 10:17:06.968097 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 21 10:17:06.968111 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Apr 21 10:17:06.968126 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Apr 21 10:17:06.968140 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 21 10:17:06.968154 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 21 10:17:06.968167 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 21 10:17:06.968184 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 21 10:17:06.968198 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 21 10:17:06.968212 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Apr 21 10:17:06.968226 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Apr 21 10:17:06.968240 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Apr 21 10:17:06.968253 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Apr 21 10:17:06.968267 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Apr 21 10:17:06.968281 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Apr 21 10:17:06.968295 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Apr 21 10:17:06.968309 kernel: Freeing SMP alternatives memory: 32K
Apr 21 10:17:06.968322 kernel: pid_max: default: 32768 minimum: 301
Apr 21 10:17:06.968338 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 21 10:17:06.968351 kernel: landlock: Up and running.
Apr 21 10:17:06.968364 kernel: SELinux: Initializing.
Apr 21 10:17:06.968390 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 21 10:17:06.968413 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 21 10:17:06.968428 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Apr 21 10:17:06.968442 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:17:06.968457 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:17:06.968470 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:17:06.968485 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Apr 21 10:17:06.968504 kernel: signal: max sigframe size: 3632
Apr 21 10:17:06.968519 kernel: rcu: Hierarchical SRCU implementation.
Apr 21 10:17:06.968536 kernel: rcu: Max phase no-delay instances is 400.
Apr 21 10:17:06.968551 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 21 10:17:06.968567 kernel: smp: Bringing up secondary CPUs ...
Apr 21 10:17:06.968583 kernel: smpboot: x86: Booting SMP configuration:
Apr 21 10:17:06.968598 kernel: .... node #0, CPUs: #1
Apr 21 10:17:06.968615 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Apr 21 10:17:06.968631 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Apr 21 10:17:06.968650 kernel: smp: Brought up 1 node, 2 CPUs
Apr 21 10:17:06.968666 kernel: smpboot: Max logical packages: 1
Apr 21 10:17:06.968683 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Apr 21 10:17:06.968698 kernel: devtmpfs: initialized
Apr 21 10:17:06.968715 kernel: x86/mm: Memory block size: 128MB
Apr 21 10:17:06.968760 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Apr 21 10:17:06.968776 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 21 10:17:06.968792 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 21 10:17:06.968808 kernel: pinctrl core: initialized pinctrl subsystem
Apr 21 10:17:06.968827 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 21 10:17:06.968843 kernel: audit: initializing netlink subsys (disabled)
Apr 21 10:17:06.968859 kernel: audit: type=2000 audit(1776766626.922:1): state=initialized audit_enabled=0 res=1
Apr 21 10:17:06.968874 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 21 10:17:06.968890 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 21 10:17:06.968906 kernel: cpuidle: using governor menu
Apr 21 10:17:06.968922 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 21 10:17:06.968938 kernel: dca service started, version 1.12.1
Apr 21 10:17:06.968954 kernel: PCI: Using configuration type 1 for base access
Apr 21 10:17:06.968973 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 21 10:17:06.968989 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 21 10:17:06.969005 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 21 10:17:06.969021 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 21 10:17:06.969036 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 21 10:17:06.969051 kernel: ACPI: Added _OSI(Module Device)
Apr 21 10:17:06.969066 kernel: ACPI: Added _OSI(Processor Device)
Apr 21 10:17:06.969081 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 21 10:17:06.969096 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Apr 21 10:17:06.969115 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 21 10:17:06.969129 kernel: ACPI: Interpreter enabled
Apr 21 10:17:06.969145 kernel: ACPI: PM: (supports S0 S5)
Apr 21 10:17:06.969160 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 21 10:17:06.969175 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 21 10:17:06.969190 kernel: PCI: Using E820 reservations for host bridge windows
Apr 21 10:17:06.969205 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Apr 21 10:17:06.969220 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 21 10:17:06.969477 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Apr 21 10:17:06.969619 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Apr 21 10:17:06.969771 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Apr 21 10:17:06.969789 kernel: acpiphp: Slot [3] registered
Apr 21 10:17:06.969805 kernel: acpiphp: Slot [4] registered
Apr 21 10:17:06.969820 kernel: acpiphp: Slot [5] registered
Apr 21 10:17:06.969836 kernel: acpiphp: Slot [6] registered
Apr 21 10:17:06.969850 kernel: acpiphp: Slot [7] registered
Apr 21 10:17:06.969869 kernel: acpiphp: Slot [8] registered
Apr 21 10:17:06.969884 kernel: acpiphp: Slot [9] registered
Apr 21 10:17:06.969899 kernel: acpiphp: Slot [10] registered
Apr 21 10:17:06.969914 kernel: acpiphp: Slot [11] registered
Apr 21 10:17:06.969928 kernel: acpiphp: Slot [12] registered
Apr 21 10:17:06.969943 kernel: acpiphp: Slot [13] registered
Apr 21 10:17:06.969959 kernel: acpiphp: Slot [14] registered
Apr 21 10:17:06.969974 kernel: acpiphp: Slot [15] registered
Apr 21 10:17:06.969989 kernel: acpiphp: Slot [16] registered
Apr 21 10:17:06.970004 kernel: acpiphp: Slot [17] registered
Apr 21 10:17:06.970023 kernel: acpiphp: Slot [18] registered
Apr 21 10:17:06.970038 kernel: acpiphp: Slot [19] registered
Apr 21 10:17:06.970052 kernel: acpiphp: Slot [20] registered
Apr 21 10:17:06.970067 kernel: acpiphp: Slot [21] registered
Apr 21 10:17:06.970082 kernel: acpiphp: Slot [22] registered
Apr 21 10:17:06.970097 kernel: acpiphp: Slot [23] registered
Apr 21 10:17:06.970112 kernel: acpiphp: Slot [24] registered
Apr 21 10:17:06.970127 kernel: acpiphp: Slot [25] registered
Apr 21 10:17:06.970142 kernel: acpiphp: Slot [26] registered
Apr 21 10:17:06.970160 kernel: acpiphp: Slot [27] registered
Apr 21 10:17:06.970175 kernel: acpiphp: Slot [28] registered
Apr 21 10:17:06.970190 kernel: acpiphp: Slot [29] registered
Apr 21 10:17:06.970205 kernel: acpiphp: Slot [30] registered
Apr 21 10:17:06.970220 kernel: acpiphp: Slot [31] registered
Apr 21 10:17:06.970234 kernel: PCI host bridge to bus 0000:00
Apr 21 10:17:06.970363 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 21 10:17:06.970486 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 21 10:17:06.970638 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 21 10:17:06.971623 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Apr 21 10:17:06.971798 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Apr 21 10:17:06.971932 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 21 10:17:06.972108 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Apr 21 10:17:06.972262 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Apr 21 10:17:06.972410 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Apr 21 10:17:06.972552 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Apr 21 10:17:06.972686 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Apr 21 10:17:06.974904 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Apr 21 10:17:06.975064 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Apr 21 10:17:06.975202 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Apr 21 10:17:06.975351 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Apr 21 10:17:06.975486 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Apr 21 10:17:06.975634 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Apr 21 10:17:06.975778 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Apr 21 10:17:06.975913 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 21 10:17:06.976046 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Apr 21 10:17:06.976816 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 21 10:17:06.977004 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Apr 21 10:17:06.977158 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Apr 21 10:17:06.977307 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Apr 21 10:17:06.977450 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Apr 21 10:17:06.977470 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 21 10:17:06.977485 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 21 10:17:06.977499 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 21 10:17:06.977516 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 21 10:17:06.977532 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Apr 21 10:17:06.977551 kernel: iommu: Default domain type: Translated
Apr 21 10:17:06.977567 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 21 10:17:06.977583 kernel: efivars: Registered efivars operations
Apr 21 10:17:06.977598 kernel: PCI: Using ACPI for IRQ routing
Apr 21 10:17:06.977614 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 21 10:17:06.977630 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Apr 21 10:17:06.977645 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Apr 21 10:17:06.978926 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Apr 21 10:17:06.979109 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Apr 21 10:17:06.979252 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 21 10:17:06.979272 kernel: vgaarb: loaded
Apr 21 10:17:06.979289 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Apr 21 10:17:06.979305 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Apr 21 10:17:06.979320 kernel: clocksource: Switched to clocksource kvm-clock
Apr 21 10:17:06.979336 kernel: VFS: Disk quotas dquot_6.6.0
Apr 21 10:17:06.979438 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 21 10:17:06.979454 kernel: pnp: PnP ACPI init
Apr 21 10:17:06.979470 kernel: pnp: PnP ACPI: found 5 devices
Apr 21 10:17:06.979490 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 21 10:17:06.979506 kernel: NET: Registered PF_INET protocol family
Apr 21 10:17:06.979522 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 21 10:17:06.979538 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Apr 21 10:17:06.979554 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 21 10:17:06.979570 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 21 10:17:06.979585 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Apr 21 10:17:06.979601 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Apr 21 10:17:06.979620 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 21 10:17:06.979635 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 21 10:17:06.979651 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 21 10:17:06.979666 kernel: NET: Registered PF_XDP protocol family
Apr 21 10:17:06.980876 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 21 10:17:06.981026 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 21 10:17:06.981154 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 21 10:17:06.981279 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Apr 21 10:17:06.981403 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Apr 21 10:17:06.981556 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Apr 21 10:17:06.981577 kernel: PCI: CLS 0 bytes, default 64
Apr 21 10:17:06.981592 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 21 10:17:06.981607 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Apr 21 10:17:06.981620 kernel: clocksource: Switched to clocksource tsc
Apr 21 10:17:06.981635 kernel: Initialise system trusted keyrings
Apr 21 10:17:06.981649 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Apr 21 10:17:06.981663 kernel: Key type asymmetric registered
Apr 21 10:17:06.981682 kernel: Asymmetric key parser 'x509' registered
Apr 21 10:17:06.981696 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 21 10:17:06.981710 kernel: io scheduler mq-deadline registered
Apr 21 10:17:06.981724 kernel: io scheduler kyber registered
Apr 21 10:17:06.981771 kernel: io scheduler bfq registered
Apr 21 10:17:06.981785 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 21 10:17:06.981800 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 21 10:17:06.981813 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 21 10:17:06.981828 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 21 10:17:06.981845 kernel: i8042: Warning: Keylock active
Apr 21 10:17:06.981859 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 21 10:17:06.981873 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 21 10:17:06.982017 kernel: rtc_cmos 00:00: RTC can wake from S4
Apr 21 10:17:06.982146 kernel: rtc_cmos 00:00: registered as rtc0
Apr 21 10:17:06.982271 kernel: rtc_cmos 00:00: setting system clock to 2026-04-21T10:17:06 UTC (1776766626)
Apr 21 10:17:06.982393 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Apr 21 10:17:06.982411 kernel: intel_pstate: CPU model not supported
Apr 21 10:17:06.982430 kernel: efifb: probing for efifb
Apr 21 10:17:06.982445 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Apr 21 10:17:06.982459 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Apr 21 10:17:06.982473 kernel: efifb: scrolling: redraw
Apr 21 10:17:06.982487 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 21 10:17:06.982501 kernel: Console: switching to colour frame buffer device 100x37
Apr 21 10:17:06.982515 kernel: fb0: EFI VGA frame buffer device
Apr 21 10:17:06.982529 kernel: pstore: Using crash dump compression: deflate
Apr 21 10:17:06.982543 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 21 10:17:06.982562 kernel: NET: Registered PF_INET6 protocol family
Apr 21 10:17:06.982576 kernel: Segment Routing with IPv6
Apr 21 10:17:06.982590 kernel: In-situ OAM (IOAM) with IPv6
Apr 21 10:17:06.982606 kernel: NET: Registered PF_PACKET protocol family
Apr 21 10:17:06.982619 kernel: Key type dns_resolver registered
Apr 21 10:17:06.982634 kernel: IPI shorthand broadcast: enabled
Apr 21 10:17:06.982674 kernel: sched_clock: Marking stable (474002460, 161158234)->(735030766, -99870072)
Apr 21 10:17:06.982693 kernel: registered taskstats version 1
Apr 21 10:17:06.982708 kernel: Loading compiled-in X.509 certificates
Apr 21 10:17:06.984582 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: c59d945e31647ab89a50a01beeb265fbb707808b'
Apr 21 10:17:06.984617 kernel: Key type .fscrypt registered
Apr 21 10:17:06.984636 kernel: Key type fscrypt-provisioning registered
Apr 21 10:17:06.984653 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 21 10:17:06.984672 kernel: ima: Allocated hash algorithm: sha1
Apr 21 10:17:06.984689 kernel: ima: No architecture policies found
Apr 21 10:17:06.984707 kernel: clk: Disabling unused clocks
Apr 21 10:17:06.984724 kernel: Freeing unused kernel image (initmem) memory: 42892K
Apr 21 10:17:06.985809 kernel: Write protecting the kernel read-only data: 36864k
Apr 21 10:17:06.985843 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 21 10:17:06.985861 kernel: Run /init as init process
Apr 21 10:17:06.985879 kernel: with arguments:
Apr 21 10:17:06.985896 kernel: /init
Apr 21 10:17:06.985913 kernel: with environment:
Apr 21 10:17:06.985929 kernel: HOME=/
Apr 21 10:17:06.985946 kernel: TERM=linux
Apr 21 10:17:06.985968 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 21 10:17:06.985992 systemd[1]: Detected virtualization amazon.
Apr 21 10:17:06.986010 systemd[1]: Detected architecture x86-64.
Apr 21 10:17:06.986028 systemd[1]: Running in initrd.
Apr 21 10:17:06.986046 systemd[1]: No hostname configured, using default hostname.
Apr 21 10:17:06.986066 systemd[1]: Hostname set to .
Apr 21 10:17:06.986085 systemd[1]: Initializing machine ID from VM UUID.
Apr 21 10:17:06.986103 systemd[1]: Queued start job for default target initrd.target.
Apr 21 10:17:06.986121 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:17:06.986142 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:17:06.986161 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 21 10:17:06.986180 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 10:17:06.986198 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 21 10:17:06.986220 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 21 10:17:06.986244 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 21 10:17:06.986263 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 21 10:17:06.986281 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:17:06.986299 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:17:06.986318 systemd[1]: Reached target paths.target - Path Units.
Apr 21 10:17:06.986336 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 10:17:06.986355 systemd[1]: Reached target swap.target - Swaps.
Apr 21 10:17:06.986376 systemd[1]: Reached target timers.target - Timer Units.
Apr 21 10:17:06.986394 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 10:17:06.986413 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 10:17:06.986431 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 21 10:17:06.986450 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 21 10:17:06.986468 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:17:06.986487 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:17:06.986505 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:17:06.986524 systemd[1]: Reached target sockets.target - Socket Units.
Apr 21 10:17:06.986545 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 21 10:17:06.986563 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 21 10:17:06.986582 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 21 10:17:06.986600 systemd[1]: Starting systemd-fsck-usr.service...
Apr 21 10:17:06.986618 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 21 10:17:06.986636 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 21 10:17:06.986655 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:17:06.986711 systemd-journald[179]: Collecting audit messages is disabled.
Apr 21 10:17:06.986771 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 21 10:17:06.986786 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:17:06.986800 systemd[1]: Finished systemd-fsck-usr.service.
Apr 21 10:17:06.986821 systemd-journald[179]: Journal started
Apr 21 10:17:06.986857 systemd-journald[179]: Runtime Journal (/run/log/journal/ec242f3b19d660c3b5c8017b1e6934e8) is 4.7M, max 38.2M, 33.4M free.
Apr 21 10:17:06.976690 systemd-modules-load[180]: Inserted module 'overlay'
Apr 21 10:17:06.991894 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 21 10:17:07.005962 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 21 10:17:07.008990 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 21 10:17:07.012939 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 21 10:17:07.016311 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:17:07.029802 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 21 10:17:07.034043 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 21 10:17:07.038821 kernel: Bridge firewalling registered
Apr 21 10:17:07.035185 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 10:17:07.038830 systemd-modules-load[180]: Inserted module 'br_netfilter'
Apr 21 10:17:07.046132 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 21 10:17:07.047173 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:17:07.051578 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:17:07.059990 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 10:17:07.063095 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:17:07.075706 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:17:07.084111 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 21 10:17:07.085161 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:17:07.095023 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 21 10:17:07.101899 dracut-cmdline[214]: dracut-dracut-053
Apr 21 10:17:07.105846 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:17:07.143912 systemd-resolved[216]: Positive Trust Anchors:
Apr 21 10:17:07.143936 systemd-resolved[216]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 21 10:17:07.144001 systemd-resolved[216]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 21 10:17:07.150688 systemd-resolved[216]: Defaulting to hostname 'linux'.
Apr 21 10:17:07.154352 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 21 10:17:07.156554 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:17:07.197776 kernel: SCSI subsystem initialized
Apr 21 10:17:07.207774 kernel: Loading iSCSI transport class v2.0-870.
Apr 21 10:17:07.218752 kernel: iscsi: registered transport (tcp)
Apr 21 10:17:07.240975 kernel: iscsi: registered transport (qla4xxx)
Apr 21 10:17:07.241057 kernel: QLogic iSCSI HBA Driver
Apr 21 10:17:07.281499 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 21 10:17:07.288909 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 21 10:17:07.315035 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 21 10:17:07.315117 kernel: device-mapper: uevent: version 1.0.3
Apr 21 10:17:07.315139 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 21 10:17:07.357754 kernel: raid6: avx512x4 gen() 17513 MB/s
Apr 21 10:17:07.375761 kernel: raid6: avx512x2 gen() 17266 MB/s
Apr 21 10:17:07.393757 kernel: raid6: avx512x1 gen() 17358 MB/s
Apr 21 10:17:07.411759 kernel: raid6: avx2x4 gen() 17386 MB/s
Apr 21 10:17:07.429756 kernel: raid6: avx2x2 gen() 17342 MB/s
Apr 21 10:17:07.448038 kernel: raid6: avx2x1 gen() 13026 MB/s
Apr 21 10:17:07.448106 kernel: raid6: using algorithm avx512x4 gen() 17513 MB/s
Apr 21 10:17:07.466969 kernel: raid6: .... xor() 7447 MB/s, rmw enabled
Apr 21 10:17:07.467033 kernel: raid6: using avx512x2 recovery algorithm
Apr 21 10:17:07.488773 kernel: xor: automatically using best checksumming function avx
Apr 21 10:17:07.649761 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 21 10:17:07.660051 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 21 10:17:07.668949 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:17:07.682481 systemd-udevd[399]: Using default interface naming scheme 'v255'.
Apr 21 10:17:07.687705 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:17:07.697033 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 21 10:17:07.716423 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
Apr 21 10:17:07.747779 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 21 10:17:07.752940 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 21 10:17:07.806720 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:17:07.817175 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 21 10:17:07.846357 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 21 10:17:07.849317 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 21 10:17:07.850849 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:17:07.851548 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 21 10:17:07.858024 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 21 10:17:07.891046 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 21 10:17:07.927757 kernel: ena 0000:00:05.0: ENA device version: 0.10
Apr 21 10:17:07.928073 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Apr 21 10:17:07.933401 kernel: cryptd: max_cpu_qlen set to 1000
Apr 21 10:17:07.934322 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 21 10:17:07.934572 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:17:07.937198 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 21 10:17:07.945649 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Apr 21 10:17:07.945930 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:34:7f:74:65:fb
Apr 21 10:17:07.946839 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 10:17:07.947956 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:17:07.950523 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:17:07.951917 (udev-worker)[446]: Network interface NamePolicy= disabled on kernel command line.
Apr 21 10:17:07.960769 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 21 10:17:07.960844 kernel: AES CTR mode by8 optimization enabled
Apr 21 10:17:07.960210 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:17:07.973522 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 10:17:07.974416 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:17:07.987780 kernel: nvme nvme0: pci function 0000:00:04.0
Apr 21 10:17:07.988115 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:17:07.994889 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Apr 21 10:17:08.005235 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:17:08.012819 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Apr 21 10:17:08.013557 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 21 10:17:08.019145 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 21 10:17:08.019205 kernel: GPT:9289727 != 33554431
Apr 21 10:17:08.019228 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 21 10:17:08.020798 kernel: GPT:9289727 != 33554431
Apr 21 10:17:08.022487 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 21 10:17:08.022526 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 21 10:17:08.043355 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:17:08.105761 kernel: BTRFS: device fsid 4627a20b-c3ad-458e-a05a-90623574a539 devid 1 transid 31 /dev/nvme0n1p3 scanned by (udev-worker) (457)
Apr 21 10:17:08.111793 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (444)
Apr 21 10:17:08.178546 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Apr 21 10:17:08.196704 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Apr 21 10:17:08.202718 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Apr 21 10:17:08.204335 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Apr 21 10:17:08.210982 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 21 10:17:08.217942 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 21 10:17:08.224800 disk-uuid[628]: Primary Header is updated.
Apr 21 10:17:08.224800 disk-uuid[628]: Secondary Entries is updated.
Apr 21 10:17:08.224800 disk-uuid[628]: Secondary Header is updated.
Apr 21 10:17:08.229778 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 21 10:17:08.236762 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 21 10:17:08.243757 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 21 10:17:09.246230 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 21 10:17:09.246302 disk-uuid[629]: The operation has completed successfully.
Apr 21 10:17:09.394504 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 21 10:17:09.394633 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 21 10:17:09.423007 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 21 10:17:09.426967 sh[972]: Success
Apr 21 10:17:09.441756 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Apr 21 10:17:09.552657 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 21 10:17:09.561868 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 21 10:17:09.565708 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 21 10:17:09.612096 kernel: BTRFS info (device dm-0): first mount of filesystem 4627a20b-c3ad-458e-a05a-90623574a539
Apr 21 10:17:09.612171 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:17:09.613991 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 21 10:17:09.615866 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 21 10:17:09.618184 kernel: BTRFS info (device dm-0): using free space tree
Apr 21 10:17:09.709798 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 21 10:17:09.740381 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 21 10:17:09.741759 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 21 10:17:09.748972 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 21 10:17:09.753126 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 21 10:17:09.770783 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:17:09.774142 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:17:09.774209 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 21 10:17:09.788762 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 21 10:17:09.802140 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 21 10:17:09.805151 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:17:09.813627 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 21 10:17:09.822061 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 21 10:17:09.860511 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 10:17:09.869944 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 21 10:17:09.890858 systemd-networkd[1164]: lo: Link UP
Apr 21 10:17:09.890869 systemd-networkd[1164]: lo: Gained carrier
Apr 21 10:17:09.892719 systemd-networkd[1164]: Enumeration completed
Apr 21 10:17:09.892852 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 21 10:17:09.893818 systemd-networkd[1164]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:17:09.893824 systemd-networkd[1164]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 10:17:09.895233 systemd[1]: Reached target network.target - Network.
Apr 21 10:17:09.897212 systemd-networkd[1164]: eth0: Link UP
Apr 21 10:17:09.897217 systemd-networkd[1164]: eth0: Gained carrier
Apr 21 10:17:09.897230 systemd-networkd[1164]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:17:09.910843 systemd-networkd[1164]: eth0: DHCPv4 address 172.31.28.88/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 21 10:17:10.165470 ignition[1104]: Ignition 2.19.0
Apr 21 10:17:10.165484 ignition[1104]: Stage: fetch-offline
Apr 21 10:17:10.165769 ignition[1104]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:17:10.165785 ignition[1104]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 21 10:17:10.166378 ignition[1104]: Ignition finished successfully
Apr 21 10:17:10.168543 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 21 10:17:10.173994 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 21 10:17:10.188772 ignition[1173]: Ignition 2.19.0
Apr 21 10:17:10.188786 ignition[1173]: Stage: fetch
Apr 21 10:17:10.189236 ignition[1173]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:17:10.189250 ignition[1173]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 21 10:17:10.189371 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 21 10:17:10.197971 ignition[1173]: PUT result: OK
Apr 21 10:17:10.199763 ignition[1173]: parsed url from cmdline: ""
Apr 21 10:17:10.199774 ignition[1173]: no config URL provided
Apr 21 10:17:10.199786 ignition[1173]: reading system config file "/usr/lib/ignition/user.ign"
Apr 21 10:17:10.199802 ignition[1173]: no config at "/usr/lib/ignition/user.ign"
Apr 21 10:17:10.199848 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 21 10:17:10.200445 ignition[1173]: PUT result: OK
Apr 21 10:17:10.200523 ignition[1173]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Apr 21 10:17:10.201287 ignition[1173]: GET result: OK
Apr 21 10:17:10.201381 ignition[1173]: parsing config with SHA512: 3e4b436be6e01c42610a9bb0adbd984ff6ced659ed114542e03ba703bfca6cb5603bd6c3f3dfe75ae4d096bd65039caea8f42dc1934e414b6a418302dfd7c23e
Apr 21 10:17:10.207147 unknown[1173]: fetched base config from "system"
Apr 21 10:17:10.207176 unknown[1173]: fetched base config from "system"
Apr 21 10:17:10.208412 ignition[1173]: fetch: fetch complete
Apr 21 10:17:10.207187 unknown[1173]: fetched user config from "aws"
Apr 21 10:17:10.208421 ignition[1173]: fetch: fetch passed
Apr 21 10:17:10.208496 ignition[1173]: Ignition finished successfully
Apr 21 10:17:10.210851 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 21 10:17:10.216061 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 21 10:17:10.232698 ignition[1179]: Ignition 2.19.0
Apr 21 10:17:10.232713 ignition[1179]: Stage: kargs
Apr 21 10:17:10.233202 ignition[1179]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:17:10.233218 ignition[1179]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 21 10:17:10.233337 ignition[1179]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 21 10:17:10.235528 ignition[1179]: PUT result: OK
Apr 21 10:17:10.239137 ignition[1179]: kargs: kargs passed
Apr 21 10:17:10.239219 ignition[1179]: Ignition finished successfully
Apr 21 10:17:10.240828 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 21 10:17:10.246071 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 21 10:17:10.262649 ignition[1185]: Ignition 2.19.0
Apr 21 10:17:10.262663 ignition[1185]: Stage: disks
Apr 21 10:17:10.263161 ignition[1185]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:17:10.263176 ignition[1185]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 21 10:17:10.263425 ignition[1185]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 21 10:17:10.264261 ignition[1185]: PUT result: OK
Apr 21 10:17:10.266861 ignition[1185]: disks: disks passed
Apr 21 10:17:10.266954 ignition[1185]: Ignition finished successfully
Apr 21 10:17:10.268880 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 21 10:17:10.269506 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 21 10:17:10.269913 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 21 10:17:10.270456 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 10:17:10.271028 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 21 10:17:10.271763 systemd[1]: Reached target basic.target - Basic System.
Apr 21 10:17:10.283038 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 21 10:17:10.317272 systemd-fsck[1193]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 21 10:17:10.321103 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 21 10:17:10.326861 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 21 10:17:10.430755 kernel: EXT4-fs (nvme0n1p9): mounted filesystem fd5e5f40-ad85-46ea-abb5-3cc3d4cd8af5 r/w with ordered data mode. Quota mode: none.
Apr 21 10:17:10.431132 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 21 10:17:10.432491 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 21 10:17:10.461880 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 10:17:10.466789 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 21 10:17:10.469035 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 21 10:17:10.469878 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 21 10:17:10.469918 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 10:17:10.481746 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1212)
Apr 21 10:17:10.482133 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 21 10:17:10.489332 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:17:10.489370 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:17:10.489392 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 21 10:17:10.494962 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 21 10:17:10.499803 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 21 10:17:10.501451 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 21 10:17:10.847399 initrd-setup-root[1237]: cut: /sysroot/etc/passwd: No such file or directory
Apr 21 10:17:10.863465 initrd-setup-root[1244]: cut: /sysroot/etc/group: No such file or directory
Apr 21 10:17:10.868943 initrd-setup-root[1251]: cut: /sysroot/etc/shadow: No such file or directory
Apr 21 10:17:10.874251 initrd-setup-root[1258]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 21 10:17:11.095542 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 21 10:17:11.099852 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 21 10:17:11.103923 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 21 10:17:11.113970 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 21 10:17:11.116835 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:17:11.155813 ignition[1331]: INFO : Ignition 2.19.0
Apr 21 10:17:11.157543 ignition[1331]: INFO : Stage: mount
Apr 21 10:17:11.157543 ignition[1331]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:17:11.157543 ignition[1331]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 21 10:17:11.157543 ignition[1331]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 21 10:17:11.156313 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 21 10:17:11.160398 ignition[1331]: INFO : PUT result: OK
Apr 21 10:17:11.162947 ignition[1331]: INFO : mount: mount passed
Apr 21 10:17:11.163664 ignition[1331]: INFO : Ignition finished successfully
Apr 21 10:17:11.164574 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 21 10:17:11.170867 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 21 10:17:11.178851 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 10:17:11.202174 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1343)
Apr 21 10:17:11.202256 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:17:11.205699 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:17:11.205780 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 21 10:17:11.212750 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 21 10:17:11.214325 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 21 10:17:11.239029 ignition[1359]: INFO : Ignition 2.19.0
Apr 21 10:17:11.239029 ignition[1359]: INFO : Stage: files
Apr 21 10:17:11.240550 ignition[1359]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:17:11.240550 ignition[1359]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 21 10:17:11.240550 ignition[1359]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 21 10:17:11.241813 ignition[1359]: INFO : PUT result: OK
Apr 21 10:17:11.244851 ignition[1359]: DEBUG : files: compiled without relabeling support, skipping
Apr 21 10:17:11.245561 ignition[1359]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 21 10:17:11.245561 ignition[1359]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 21 10:17:11.260814 ignition[1359]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 21 10:17:11.261788 ignition[1359]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 21 10:17:11.262805 unknown[1359]: wrote ssh authorized keys file for user: core
Apr 21 10:17:11.263630 ignition[1359]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 21 10:17:11.273990 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 21 10:17:11.275078 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 21 10:17:11.275078 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 10:17:11.275078 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 21 10:17:11.393975 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 21 10:17:11.536265 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 10:17:11.537277 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 21 10:17:11.537277 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 21 10:17:11.744913 systemd-networkd[1164]: eth0: Gained IPv6LL
Apr 21 10:17:11.832421 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Apr 21 10:17:12.073715 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 21 10:17:12.073715 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Apr 21 10:17:12.078279 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Apr 21 10:17:12.078279 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 10:17:12.078279 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 10:17:12.078279 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 10:17:12.078279 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 10:17:12.078279 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 10:17:12.078279 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 10:17:12.078279 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 10:17:12.078279 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 10:17:12.085344 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 10:17:12.085344 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 10:17:12.085344 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 10:17:12.085344 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 21 10:17:12.552065 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Apr 21 10:17:12.855598 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 10:17:12.855598 ignition[1359]: INFO : files: op(d): [started] processing unit "containerd.service"
Apr 21 10:17:12.857910 ignition[1359]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 21 10:17:12.857910 ignition[1359]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 21 10:17:12.857910 ignition[1359]: INFO : files: op(d): [finished] processing unit "containerd.service"
Apr 21 10:17:12.857910 ignition[1359]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Apr 21 10:17:12.857910 ignition[1359]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 10:17:12.857910 ignition[1359]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 10:17:12.857910 ignition[1359]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Apr 21 10:17:12.857910 ignition[1359]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Apr 21 10:17:12.857910 ignition[1359]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Apr 21 10:17:12.873824 ignition[1359]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 10:17:12.873824 ignition[1359]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 10:17:12.873824 ignition[1359]: INFO : files: files passed
Apr 21 10:17:12.873824 ignition[1359]: INFO : Ignition finished successfully
Apr 21 10:17:12.860116 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 21 10:17:12.865043 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 21 10:17:12.870135 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 21 10:17:12.887171 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 21 10:17:12.887398 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 21 10:17:12.895954 initrd-setup-root-after-ignition[1389]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:17:12.895954 initrd-setup-root-after-ignition[1389]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:17:12.899973 initrd-setup-root-after-ignition[1393]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:17:12.900295 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 10:17:12.902080 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 21 10:17:12.907950 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 21 10:17:12.934252 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 21 10:17:12.934407 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 21 10:17:12.935688 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 21 10:17:12.936833 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 21 10:17:12.938068 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 21 10:17:12.942933 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 21 10:17:12.957228 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 10:17:12.961967 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 21 10:17:12.975201 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:17:12.976011 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:17:12.977052 systemd[1]: Stopped target timers.target - Timer Units.
Apr 21 10:17:12.977950 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 21 10:17:12.978139 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 10:17:12.979395 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 21 10:17:12.980306 systemd[1]: Stopped target basic.target - Basic System.
Apr 21 10:17:12.981142 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 21 10:17:12.981917 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 10:17:12.982686 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 21 10:17:12.983554 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 21 10:17:12.984317 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 21 10:17:12.985112 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 21 10:17:12.986261 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 21 10:17:12.987031 systemd[1]: Stopped target swap.target - Swaps.
Apr 21 10:17:12.987823 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 21 10:17:12.988010 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 21 10:17:12.989089 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:17:12.989895 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:17:12.990568 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 21 10:17:12.990919 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:17:12.991441 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 21 10:17:12.991617 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 21 10:17:12.993041 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 21 10:17:12.993227 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 10:17:12.993944 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 21 10:17:12.994094 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 21 10:17:13.001084 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 21 10:17:13.001742 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 21 10:17:13.001937 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:17:13.006059 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 21 10:17:13.007290 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 21 10:17:13.007558 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:17:13.008519 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 21 10:17:13.008722 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 21 10:17:13.023811 ignition[1413]: INFO : Ignition 2.19.0
Apr 21 10:17:13.023811 ignition[1413]: INFO : Stage: umount
Apr 21 10:17:13.025108 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 21 10:17:13.027091 ignition[1413]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:17:13.027091 ignition[1413]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 21 10:17:13.027091 ignition[1413]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 21 10:17:13.025250 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 21 10:17:13.032406 ignition[1413]: INFO : PUT result: OK
Apr 21 10:17:13.033790 ignition[1413]: INFO : umount: umount passed
Apr 21 10:17:13.033790 ignition[1413]: INFO : Ignition finished successfully
Apr 21 10:17:13.036482 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 21 10:17:13.036598 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 21 10:17:13.037410 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 21 10:17:13.037474 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 21 10:17:13.038188 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 21 10:17:13.038248 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 21 10:17:13.038946 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 21 10:17:13.039002 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 21 10:17:13.039710 systemd[1]: Stopped target network.target - Network.
Apr 21 10:17:13.040353 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 21 10:17:13.040417 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 21 10:17:13.043187 systemd[1]: Stopped target paths.target - Path Units.
Apr 21 10:17:13.044310 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 21 10:17:13.049813 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:17:13.050357 systemd[1]: Stopped target slices.target - Slice Units.
Apr 21 10:17:13.050854 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 21 10:17:13.051420 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 21 10:17:13.051484 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 10:17:13.051959 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 21 10:17:13.052011 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 10:17:13.052960 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 21 10:17:13.053031 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 21 10:17:13.053624 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 21 10:17:13.053686 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 21 10:17:13.054431 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 21 10:17:13.055949 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 21 10:17:13.057986 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 21 10:17:13.061576 systemd-networkd[1164]: eth0: DHCPv6 lease lost
Apr 21 10:17:13.063913 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 21 10:17:13.064047 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 21 10:17:13.065067 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 21 10:17:13.065219 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 21 10:17:13.067882 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 21 10:17:13.068056 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 21 10:17:13.070195 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 21 10:17:13.070260 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:17:13.071098 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 21 10:17:13.071167 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 21 10:17:13.078978 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 21 10:17:13.079822 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 21 10:17:13.079929 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 10:17:13.080543 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 21 10:17:13.080605 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:17:13.081215 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 21 10:17:13.081271 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:17:13.081686 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 21 10:17:13.081724 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:17:13.085306 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:17:13.098944 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 21 10:17:13.099908 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 21 10:17:13.101648 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 21 10:17:13.101870 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:17:13.103115 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 21 10:17:13.103174 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:17:13.104081 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 21 10:17:13.104129 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:17:13.104886 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 21 10:17:13.104948 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 21 10:17:13.105942 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 21 10:17:13.106005 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 21 10:17:13.107057 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 21 10:17:13.107116 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:17:13.114949 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 21 10:17:13.115707 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 21 10:17:13.117379 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:17:13.118068 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 10:17:13.118134 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:17:13.123040 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 21 10:17:13.123167 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 21 10:17:13.124627 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 21 10:17:13.128912 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 21 10:17:13.163552 systemd[1]: Switching root.
Apr 21 10:17:13.193831 systemd-journald[179]: Journal stopped
Apr 21 10:17:14.986388 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Apr 21 10:17:14.986502 kernel: SELinux: policy capability network_peer_controls=1
Apr 21 10:17:14.986527 kernel: SELinux: policy capability open_perms=1
Apr 21 10:17:14.986547 kernel: SELinux: policy capability extended_socket_class=1
Apr 21 10:17:14.986568 kernel: SELinux: policy capability always_check_network=0
Apr 21 10:17:14.986602 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 21 10:17:14.986622 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 21 10:17:14.986644 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 21 10:17:14.986662 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 21 10:17:14.986681 kernel: audit: type=1403 audit(1776766633.860:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 21 10:17:14.986704 systemd[1]: Successfully loaded SELinux policy in 51.153ms.
Apr 21 10:17:14.987358 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.660ms.
Apr 21 10:17:14.987389 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 21 10:17:14.987410 systemd[1]: Detected virtualization amazon.
Apr 21 10:17:14.987430 systemd[1]: Detected architecture x86-64.
Apr 21 10:17:14.987451 systemd[1]: Detected first boot.
Apr 21 10:17:14.987471 systemd[1]: Initializing machine ID from VM UUID.
Apr 21 10:17:14.987493 zram_generator::config[1472]: No configuration found.
Apr 21 10:17:14.987523 systemd[1]: Populated /etc with preset unit settings.
Apr 21 10:17:14.987546 systemd[1]: Queued start job for default target multi-user.target.
Apr 21 10:17:14.987569 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Apr 21 10:17:14.987593 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 21 10:17:14.987615 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 21 10:17:14.987639 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 21 10:17:14.987662 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 21 10:17:14.987684 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 21 10:17:14.987710 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 21 10:17:14.987744 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 21 10:17:14.987766 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 21 10:17:14.987786 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:17:14.987806 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:17:14.987825 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 21 10:17:14.987844 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 21 10:17:14.987862 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 21 10:17:14.987881 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 10:17:14.987904 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 21 10:17:14.987924 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:17:14.987952 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 21 10:17:14.987972 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:17:14.987992 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 21 10:17:14.988012 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 10:17:14.988031 systemd[1]: Reached target swap.target - Swaps.
Apr 21 10:17:14.988051 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 21 10:17:14.988074 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 21 10:17:14.988095 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 21 10:17:14.988116 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 21 10:17:14.988138 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:17:14.988162 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:17:14.988183 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:17:14.988202 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 21 10:17:14.988220 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 21 10:17:14.988239 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 21 10:17:14.988258 systemd[1]: Mounting media.mount - External Media Directory...
Apr 21 10:17:14.988280 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:17:14.988300 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 21 10:17:14.988320 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 21 10:17:14.988342 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 21 10:17:14.988364 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 21 10:17:14.988388 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:17:14.988409 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 21 10:17:14.988431 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 21 10:17:14.988455 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:17:14.988478 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 10:17:14.988498 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 10:17:14.988519 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 21 10:17:14.988540 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:17:14.988562 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 21 10:17:14.988584 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Apr 21 10:17:14.988605 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Apr 21 10:17:14.988628 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 21 10:17:14.988649 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 21 10:17:14.988669 kernel: loop: module loaded
Apr 21 10:17:14.988690 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 21 10:17:14.988711 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 21 10:17:14.989777 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 21 10:17:14.989816 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:17:14.989838 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 21 10:17:14.989859 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 21 10:17:14.989887 systemd[1]: Mounted media.mount - External Media Directory.
Apr 21 10:17:14.989909 kernel: fuse: init (API version 7.39)
Apr 21 10:17:14.989931 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 21 10:17:14.989995 systemd-journald[1572]: Collecting audit messages is disabled.
Apr 21 10:17:14.990039 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 21 10:17:14.990058 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 21 10:17:14.990078 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:17:14.990098 systemd-journald[1572]: Journal started
Apr 21 10:17:14.990140 systemd-journald[1572]: Runtime Journal (/run/log/journal/ec242f3b19d660c3b5c8017b1e6934e8) is 4.7M, max 38.2M, 33.4M free.
Apr 21 10:17:14.996624 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 21 10:17:15.002502 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 21 10:17:15.002839 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 21 10:17:15.004097 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:17:15.004332 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:17:15.007638 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:17:15.007890 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:17:15.008983 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 21 10:17:15.009195 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 21 10:17:15.009943 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:17:15.010129 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:17:15.013626 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:17:15.014820 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 21 10:17:15.015905 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 21 10:17:15.029396 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 21 10:17:15.036948 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 21 10:17:15.050855 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 21 10:17:15.051605 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 21 10:17:15.065930 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 21 10:17:15.090932 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 21 10:17:15.092292 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 10:17:15.096095 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 21 10:17:15.097860 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 10:17:15.101803 kernel: ACPI: bus type drm_connector registered
Apr 21 10:17:15.107991 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 10:17:15.120353 systemd-journald[1572]: Time spent on flushing to /var/log/journal/ec242f3b19d660c3b5c8017b1e6934e8 is 125.603ms for 968 entries.
Apr 21 10:17:15.120353 systemd-journald[1572]: System Journal (/var/log/journal/ec242f3b19d660c3b5c8017b1e6934e8) is 8.0M, max 195.6M, 187.6M free.
Apr 21 10:17:15.264156 systemd-journald[1572]: Received client request to flush runtime journal.
Apr 21 10:17:15.125926 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 21 10:17:15.134558 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 21 10:17:15.141449 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 10:17:15.142179 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 10:17:15.150803 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 21 10:17:15.151633 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 21 10:17:15.170032 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 21 10:17:15.171982 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 21 10:17:15.220357 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:17:15.250305 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:17:15.265955 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 21 10:17:15.271005 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 21 10:17:15.272200 systemd-tmpfiles[1621]: ACLs are not supported, ignoring.
Apr 21 10:17:15.272224 systemd-tmpfiles[1621]: ACLs are not supported, ignoring.
Apr 21 10:17:15.281268 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 10:17:15.295059 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 21 10:17:15.310405 udevadm[1635]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 21 10:17:15.338448 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 21 10:17:15.344972 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 21 10:17:15.375596 systemd-tmpfiles[1646]: ACLs are not supported, ignoring.
Apr 21 10:17:15.376090 systemd-tmpfiles[1646]: ACLs are not supported, ignoring.
Apr 21 10:17:15.382553 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:17:15.885898 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 21 10:17:15.895958 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:17:15.920529 systemd-udevd[1652]: Using default interface naming scheme 'v255'.
Apr 21 10:17:15.982507 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:17:15.991982 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 21 10:17:16.032963 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 21 10:17:16.089256 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Apr 21 10:17:16.127617 (udev-worker)[1659]: Network interface NamePolicy= disabled on kernel command line.
Apr 21 10:17:16.146411 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 21 10:17:16.155748 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Apr 21 10:17:16.159759 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 21 10:17:16.188046 kernel: ACPI: button: Power Button [PWRF]
Apr 21 10:17:16.188117 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3
Apr 21 10:17:16.212812 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4
Apr 21 10:17:16.219764 kernel: ACPI: button: Sleep Button [SLPF]
Apr 21 10:17:16.248759 kernel: mousedev: PS/2 mouse device common for all mice
Apr 21 10:17:16.286912 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (1654)
Apr 21 10:17:16.302544 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:17:16.312162 systemd-networkd[1655]: lo: Link UP
Apr 21 10:17:16.312758 systemd-networkd[1655]: lo: Gained carrier
Apr 21 10:17:16.314884 systemd-networkd[1655]: Enumeration completed
Apr 21 10:17:16.316312 systemd-networkd[1655]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:17:16.316322 systemd-networkd[1655]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 10:17:16.318612 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 21 10:17:16.321101 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 21 10:17:16.324127 systemd-networkd[1655]: eth0: Link UP
Apr 21 10:17:16.324360 systemd-networkd[1655]: eth0: Gained carrier
Apr 21 10:17:16.324387 systemd-networkd[1655]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:17:16.334826 systemd-networkd[1655]: eth0: DHCPv4 address 172.31.28.88/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 21 10:17:16.476778 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 21 10:17:16.477991 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 21 10:17:16.479718 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:17:16.489031 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 21 10:17:16.503409 lvm[1776]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 21 10:17:16.535212 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 21 10:17:16.537217 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:17:16.542056 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 21 10:17:16.555053 lvm[1779]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 21 10:17:16.582134 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 21 10:17:16.583964 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 21 10:17:16.584673 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 21 10:17:16.584714 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 10:17:16.585541 systemd[1]: Reached target machines.target - Containers.
Apr 21 10:17:16.587565 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 21 10:17:16.592952 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 21 10:17:16.596980 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 21 10:17:16.597952 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:17:16.607974 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 21 10:17:16.614955 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 21 10:17:16.630018 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 21 10:17:16.633009 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 21 10:17:16.634718 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 21 10:17:16.659420 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 21 10:17:16.660963 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 21 10:17:16.664012 kernel: loop0: detected capacity change from 0 to 140768
Apr 21 10:17:16.766757 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 21 10:17:16.797760 kernel: loop1: detected capacity change from 0 to 142488
Apr 21 10:17:16.905758 kernel: loop2: detected capacity change from 0 to 228704
Apr 21 10:17:16.958772 kernel: loop3: detected capacity change from 0 to 61336
Apr 21 10:17:17.059762 kernel: loop4: detected capacity change from 0 to 140768
Apr 21 10:17:17.078981 kernel: loop5: detected capacity change from 0 to 142488
Apr 21 10:17:17.103438 kernel: loop6: detected capacity change from 0 to 228704
Apr 21 10:17:17.137759 kernel: loop7: detected capacity change from 0 to 61336
Apr 21 10:17:17.151012 (sd-merge)[1801]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Apr 21 10:17:17.151840 (sd-merge)[1801]: Merged extensions into '/usr'.
Apr 21 10:17:17.156111 systemd[1]: Reloading requested from client PID 1788 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 21 10:17:17.156131 systemd[1]: Reloading...
Apr 21 10:17:17.260792 zram_generator::config[1830]: No configuration found.
Apr 21 10:17:17.413024 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:17:17.495411 systemd[1]: Reloading finished in 338 ms.
Apr 21 10:17:17.528190 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 21 10:17:17.548537 systemd[1]: Starting ensure-sysext.service...
Apr 21 10:17:17.556991 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 21 10:17:17.572322 systemd[1]: Reloading requested from client PID 1886 ('systemctl') (unit ensure-sysext.service)...
Apr 21 10:17:17.572353 systemd[1]: Reloading...
Apr 21 10:17:17.596710 systemd-tmpfiles[1887]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 21 10:17:17.599132 systemd-tmpfiles[1887]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 21 10:17:17.600782 systemd-tmpfiles[1887]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 21 10:17:17.601404 systemd-tmpfiles[1887]: ACLs are not supported, ignoring.
Apr 21 10:17:17.601595 systemd-tmpfiles[1887]: ACLs are not supported, ignoring.
Apr 21 10:17:17.605694 systemd-tmpfiles[1887]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 10:17:17.605772 systemd-tmpfiles[1887]: Skipping /boot
Apr 21 10:17:17.616610 systemd-tmpfiles[1887]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 10:17:17.616625 systemd-tmpfiles[1887]: Skipping /boot
Apr 21 10:17:17.633046 systemd-networkd[1655]: eth0: Gained IPv6LL
Apr 21 10:17:17.662803 ldconfig[1783]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 21 10:17:17.687819 zram_generator::config[1916]: No configuration found.
Apr 21 10:17:17.827923 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:17:17.903038 systemd[1]: Reloading finished in 330 ms.
Apr 21 10:17:17.922587 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 21 10:17:17.924126 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 21 10:17:17.925258 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:17:17.945940 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 21 10:17:17.956039 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 21 10:17:17.961361 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 21 10:17:17.976346 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 21 10:17:17.983995 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 21 10:17:18.003173 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:17:18.003520 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:17:18.009294 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:17:18.023474 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 10:17:18.029187 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:17:18.030922 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:17:18.031907 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:17:18.045280 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:17:18.045535 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:17:18.048896 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:17:18.049174 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:17:18.064941 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:17:18.065957 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:17:18.077421 augenrules[2006]: No rules
Apr 21 10:17:18.076825 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:17:18.092078 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 10:17:18.092827 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:17:18.093023 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:17:18.100210 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 21 10:17:18.106550 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 21 10:17:18.108712 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 21 10:17:18.111324 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:17:18.111551 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:17:18.117954 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:17:18.118200 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:17:18.119398 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:17:18.119897 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:17:18.134295 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:17:18.135605 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:17:18.142274 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:17:18.147929 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 10:17:18.156905 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 10:17:18.172968 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:17:18.173681 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:17:18.173993 systemd[1]: Reached target time-set.target - System Time Set.
Apr 21 10:17:18.180227 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 21 10:17:18.182845 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:17:18.189325 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:17:18.189602 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:17:18.191576 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 10:17:18.191844 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 10:17:18.194631 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 21 10:17:18.195867 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:17:18.196101 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:17:18.199217 systemd[1]: Finished ensure-sysext.service.
Apr 21 10:17:18.207583 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 10:17:18.207646 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 21 10:17:18.208849 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:17:18.210960 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:17:18.218842 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 10:17:18.234957 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 21 10:17:18.244620 systemd-resolved[1985]: Positive Trust Anchors:
Apr 21 10:17:18.244748 systemd-resolved[1985]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 21 10:17:18.244798 systemd-resolved[1985]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 21 10:17:18.250246 systemd-resolved[1985]: Defaulting to hostname 'linux'.
Apr 21 10:17:18.253256 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 21 10:17:18.253847 systemd[1]: Reached target network.target - Network.
Apr 21 10:17:18.254276 systemd[1]: Reached target network-online.target - Network is Online.
Apr 21 10:17:18.254662 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:17:18.255043 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 21 10:17:18.255621 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 21 10:17:18.256060 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 21 10:17:18.256578 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 21 10:17:18.257075 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 21 10:17:18.257422 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 21 10:17:18.257854 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 21 10:17:18.257902 systemd[1]: Reached target paths.target - Path Units.
Apr 21 10:17:18.258245 systemd[1]: Reached target timers.target - Timer Units.
Apr 21 10:17:18.259121 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 21 10:17:18.261291 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 21 10:17:18.262973 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 21 10:17:18.265902 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 21 10:17:18.266534 systemd[1]: Reached target sockets.target - Socket Units.
Apr 21 10:17:18.267095 systemd[1]: Reached target basic.target - Basic System.
Apr 21 10:17:18.268007 systemd[1]: System is tainted: cgroupsv1
Apr 21 10:17:18.268062 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 21 10:17:18.268094 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 21 10:17:18.271867 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 21 10:17:18.275942 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 21 10:17:18.281114 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 21 10:17:18.293861 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 21 10:17:18.298907 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 21 10:17:18.306520 jq[2054]: false
Apr 21 10:17:18.302821 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 21 10:17:18.318471 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:17:18.326045 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 21 10:17:18.350919 systemd[1]: Started ntpd.service - Network Time Service.
Apr 21 10:17:18.368174 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 21 10:17:18.380890 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 21 10:17:18.386528 systemd[1]: Starting setup-oem.service - Setup OEM...
Apr 21 10:17:18.404175 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 21 10:17:18.417120 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 21 10:17:18.429664 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 21 10:17:18.434575 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 21 10:17:18.440750 extend-filesystems[2055]: Found loop4
Apr 21 10:17:18.451724 extend-filesystems[2055]: Found loop5
Apr 21 10:17:18.451724 extend-filesystems[2055]: Found loop6
Apr 21 10:17:18.451724 extend-filesystems[2055]: Found loop7
Apr 21 10:17:18.451724 extend-filesystems[2055]: Found nvme0n1
Apr 21 10:17:18.451724 extend-filesystems[2055]: Found nvme0n1p1
Apr 21 10:17:18.451724 extend-filesystems[2055]: Found nvme0n1p2
Apr 21 10:17:18.451724 extend-filesystems[2055]: Found nvme0n1p3
Apr 21 10:17:18.451724 extend-filesystems[2055]: Found usr
Apr 21 10:17:18.451724 extend-filesystems[2055]: Found nvme0n1p4
Apr 21 10:17:18.451724 extend-filesystems[2055]: Found nvme0n1p6
Apr 21 10:17:18.451724 extend-filesystems[2055]: Found nvme0n1p7
Apr 21 10:17:18.451724 extend-filesystems[2055]: Found nvme0n1p9
Apr 21 10:17:18.451724 extend-filesystems[2055]: Checking size of /dev/nvme0n1p9
Apr 21 10:17:18.454321 dbus-daemon[2053]: [system] SELinux support is enabled
Apr 21 10:17:18.451914 systemd[1]: Starting update-engine.service - Update Engine...
Apr 21 10:17:18.465855 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 21 10:17:18.472944 dbus-daemon[2053]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1655 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Apr 21 10:17:18.474721 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 21 10:17:18.476915 jq[2082]: true
Apr 21 10:17:18.498422 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 21 10:17:18.501820 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 21 10:17:18.505755 update_engine[2080]: I20260421 10:17:18.505622 2080 main.cc:92] Flatcar Update Engine starting
Apr 21 10:17:18.507305 ntpd[2060]: ntpd 4.2.8p17@1.4004-o Tue Apr 21 08:10:59 UTC 2026 (1): Starting
Apr 21 10:17:18.508344 ntpd[2060]: 21 Apr 10:17:18 ntpd[2060]: ntpd 4.2.8p17@1.4004-o Tue Apr 21 08:10:59 UTC 2026 (1): Starting
Apr 21 10:17:18.508344 ntpd[2060]: 21 Apr 10:17:18 ntpd[2060]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 21 10:17:18.508344 ntpd[2060]: 21 Apr 10:17:18 ntpd[2060]: ----------------------------------------------------
Apr 21 10:17:18.508344 ntpd[2060]: 21 Apr 10:17:18 ntpd[2060]: ntp-4 is maintained by Network Time Foundation,
Apr 21 10:17:18.508344 ntpd[2060]: 21 Apr 10:17:18 ntpd[2060]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 21 10:17:18.508344 ntpd[2060]: 21 Apr 10:17:18 ntpd[2060]: corporation. Support and training for ntp-4 are
Apr 21 10:17:18.508344 ntpd[2060]: 21 Apr 10:17:18 ntpd[2060]: available at https://www.nwtime.org/support
Apr 21 10:17:18.508344 ntpd[2060]: 21 Apr 10:17:18 ntpd[2060]: ----------------------------------------------------
Apr 21 10:17:18.508207 systemd[1]: motdgen.service: Deactivated successfully.
Apr 21 10:17:18.507332 ntpd[2060]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 21 10:17:18.508566 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 21 10:17:18.507344 ntpd[2060]: ----------------------------------------------------
Apr 21 10:17:18.507355 ntpd[2060]: ntp-4 is maintained by Network Time Foundation,
Apr 21 10:17:18.507365 ntpd[2060]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 21 10:17:18.507376 ntpd[2060]: corporation. Support and training for ntp-4 are
Apr 21 10:17:18.507386 ntpd[2060]: available at https://www.nwtime.org/support
Apr 21 10:17:18.507396 ntpd[2060]: ----------------------------------------------------
Apr 21 10:17:18.518669 ntpd[2060]: proto: precision = 0.065 usec (-24)
Apr 21 10:17:18.521163 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 21 10:17:18.522999 ntpd[2060]: 21 Apr 10:17:18 ntpd[2060]: proto: precision = 0.065 usec (-24)
Apr 21 10:17:18.521532 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 21 10:17:18.527550 ntpd[2060]: basedate set to 2026-04-09
Apr 21 10:17:18.528534 ntpd[2060]: 21 Apr 10:17:18 ntpd[2060]: basedate set to 2026-04-09
Apr 21 10:17:18.528534 ntpd[2060]: 21 Apr 10:17:18 ntpd[2060]: gps base set to 2026-04-12 (week 2414)
Apr 21 10:17:18.527580 ntpd[2060]: gps base set to 2026-04-12 (week 2414)
Apr 21 10:17:18.557601 update_engine[2080]: I20260421 10:17:18.557533 2080 update_check_scheduler.cc:74] Next update check in 9m36s
Apr 21 10:17:18.563100 extend-filesystems[2055]: Resized partition /dev/nvme0n1p9
Apr 21 10:17:18.564497 ntpd[2060]: Listen and drop on 0 v6wildcard [::]:123
Apr 21 10:17:18.565114 ntpd[2060]: 21 Apr 10:17:18 ntpd[2060]: Listen and drop on 0 v6wildcard [::]:123
Apr 21 10:17:18.565114 ntpd[2060]: 21 Apr 10:17:18 ntpd[2060]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 21 10:17:18.565114 ntpd[2060]: 21 Apr 10:17:18 ntpd[2060]: Listen normally on 2 lo 127.0.0.1:123
Apr 21 10:17:18.565114 ntpd[2060]: 21 Apr 10:17:18 ntpd[2060]: Listen normally on 3 eth0 172.31.28.88:123
Apr 21 10:17:18.565114 ntpd[2060]: 21 Apr 10:17:18 ntpd[2060]: Listen normally on 4 lo [::1]:123
Apr 21 10:17:18.565114 ntpd[2060]: 21 Apr 10:17:18 ntpd[2060]: Listen normally on 5 eth0 [fe80::434:7fff:fe74:65fb%2]:123
Apr 21 10:17:18.565114 ntpd[2060]: 21 Apr 10:17:18 ntpd[2060]: Listening on routing socket on fd #22 for interface updates
Apr 21 10:17:18.564561 ntpd[2060]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 21 10:17:18.564780 ntpd[2060]: Listen normally on 2 lo 127.0.0.1:123
Apr 21 10:17:18.564822 ntpd[2060]: Listen normally on 3 eth0 172.31.28.88:123
Apr 21 10:17:18.564869 ntpd[2060]: Listen normally on 4 lo [::1]:123
Apr 21 10:17:18.564915 ntpd[2060]: Listen normally on 5 eth0 [fe80::434:7fff:fe74:65fb%2]:123
Apr 21 10:17:18.564955 ntpd[2060]: Listening on routing socket on fd #22 for interface updates
Apr 21 10:17:18.575844 extend-filesystems[2103]: resize2fs 1.47.1 (20-May-2024)
Apr 21 10:17:18.595810 coreos-metadata[2051]: Apr 21 10:17:18.578 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 21 10:17:18.595810 coreos-metadata[2051]: Apr 21 10:17:18.582 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Apr 21 10:17:18.595810 coreos-metadata[2051]: Apr 21 10:17:18.586 INFO Fetch successful
Apr 21 10:17:18.595810 coreos-metadata[2051]: Apr 21 10:17:18.586 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Apr 21 10:17:18.592319 (ntainerd)[2102]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 21 10:17:18.596601 jq[2094]: true
Apr 21 10:17:18.609908 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Apr 21 10:17:18.610121 coreos-metadata[2051]: Apr 21 10:17:18.601 INFO Fetch successful
Apr 21 10:17:18.610121 coreos-metadata[2051]: Apr 21 10:17:18.601 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Apr 21 10:17:18.610121 coreos-metadata[2051]: Apr 21 10:17:18.604 INFO Fetch successful
Apr 21 10:17:18.610121 coreos-metadata[2051]: Apr 21 10:17:18.604 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Apr 21 10:17:18.616612 coreos-metadata[2051]: Apr 21 10:17:18.612 INFO Fetch successful
Apr 21 10:17:18.616612 coreos-metadata[2051]: Apr 21 10:17:18.612 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Apr 21 10:17:18.619125 coreos-metadata[2051]: Apr 21 10:17:18.619 INFO Fetch failed with 404: resource not found
Apr 21 10:17:18.619125 coreos-metadata[2051]: Apr 21 10:17:18.619 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Apr 21 10:17:18.621268 coreos-metadata[2051]: Apr 21 10:17:18.621 INFO Fetch successful
Apr 21 10:17:18.621268 coreos-metadata[2051]: Apr 21 10:17:18.621 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Apr 21 10:17:18.623526 ntpd[2060]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 21 10:17:18.624106 ntpd[2060]: 21 Apr 10:17:18 ntpd[2060]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 21 10:17:18.624106 ntpd[2060]: 21 Apr 10:17:18 ntpd[2060]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 21 10:17:18.623568 ntpd[2060]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 21 10:17:18.657821 coreos-metadata[2051]: Apr 21 10:17:18.627 INFO Fetch successful
Apr 21 10:17:18.657821 coreos-metadata[2051]: Apr 21 10:17:18.627 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Apr 21 10:17:18.657821 coreos-metadata[2051]: Apr 21 10:17:18.629 INFO Fetch successful
Apr 21 10:17:18.657821 coreos-metadata[2051]: Apr 21 10:17:18.629 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Apr 21 10:17:18.657821 coreos-metadata[2051]: Apr 21 10:17:18.632 INFO Fetch successful
Apr 21 10:17:18.657821 coreos-metadata[2051]: Apr 21 10:17:18.632 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Apr 21 10:17:18.657821 coreos-metadata[2051]: Apr 21 10:17:18.637 INFO Fetch successful
Apr 21 10:17:18.675781 tar[2091]: linux-amd64/LICENSE
Apr 21 10:17:18.675781 tar[2091]: linux-amd64/helm
Apr 21 10:17:18.685143 dbus-daemon[2053]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 21 10:17:18.700949 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 21 10:17:18.709647 systemd[1]: Started update-engine.service - Update Engine.
Apr 21 10:17:18.728449 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 21 10:17:18.728500 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 21 10:17:18.741083 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Apr 21 10:17:18.747344 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 21 10:17:18.747391 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 21 10:17:18.748683 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 21 10:17:18.758986 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (1653)
Apr 21 10:17:18.766546 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 21 10:17:18.768658 systemd[1]: Finished setup-oem.service - Setup OEM.
Apr 21 10:17:18.842012 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Apr 21 10:17:18.871552 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 21 10:17:18.872964 systemd-logind[2076]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 21 10:17:18.872995 systemd-logind[2076]: Watching system buttons on /dev/input/event3 (Sleep Button)
Apr 21 10:17:18.873020 systemd-logind[2076]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 21 10:17:18.881814 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 21 10:17:18.884087 systemd-logind[2076]: New seat seat0.
Apr 21 10:17:18.891293 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 21 10:17:18.896755 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067
Apr 21 10:17:18.953172 extend-filesystems[2103]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Apr 21 10:17:18.953172 extend-filesystems[2103]: old_desc_blocks = 1, new_desc_blocks = 2
Apr 21 10:17:18.953172 extend-filesystems[2103]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
Apr 21 10:17:18.939071 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 21 10:17:18.993276 bash[2170]: Updated "/home/core/.ssh/authorized_keys"
Apr 21 10:17:18.993389 extend-filesystems[2055]: Resized filesystem in /dev/nvme0n1p9
Apr 21 10:17:18.939450 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 21 10:17:18.960247 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 21 10:17:18.989149 systemd[1]: Starting sshkeys.service...
Apr 21 10:17:19.066927 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 21 10:17:19.078482 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 21 10:17:19.242534 amazon-ssm-agent[2157]: Initializing new seelog logger
Apr 21 10:17:19.245051 coreos-metadata[2216]: Apr 21 10:17:19.244 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 21 10:17:19.245051 coreos-metadata[2216]: Apr 21 10:17:19.245 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Apr 21 10:17:19.245051 coreos-metadata[2216]: Apr 21 10:17:19.245 INFO Fetch successful
Apr 21 10:17:19.245051 coreos-metadata[2216]: Apr 21 10:17:19.245 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Apr 21 10:17:19.245051 coreos-metadata[2216]: Apr 21 10:17:19.245 INFO Fetch successful
Apr 21 10:17:19.247375 amazon-ssm-agent[2157]: New Seelog Logger Creation Complete
Apr 21 10:17:19.247479 amazon-ssm-agent[2157]: 2026/04/21 10:17:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:17:19.247479 amazon-ssm-agent[2157]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:17:19.250393 unknown[2216]: wrote ssh authorized keys file for user: core
Apr 21 10:17:19.251696 amazon-ssm-agent[2157]: 2026/04/21 10:17:19 processing appconfig overrides
Apr 21 10:17:19.258792 amazon-ssm-agent[2157]: 2026/04/21 10:17:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:17:19.258792 amazon-ssm-agent[2157]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:17:19.258969 amazon-ssm-agent[2157]: 2026/04/21 10:17:19 processing appconfig overrides
Apr 21 10:17:19.259335 amazon-ssm-agent[2157]: 2026/04/21 10:17:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:17:19.259335 amazon-ssm-agent[2157]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:17:19.259439 amazon-ssm-agent[2157]: 2026/04/21 10:17:19 processing appconfig overrides
Apr 21 10:17:19.267111 amazon-ssm-agent[2157]: 2026-04-21 10:17:19 INFO Proxy environment variables:
Apr 21 10:17:19.275474 amazon-ssm-agent[2157]: 2026/04/21 10:17:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:17:19.276774 amazon-ssm-agent[2157]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:17:19.276774 amazon-ssm-agent[2157]: 2026/04/21 10:17:19 processing appconfig overrides
Apr 21 10:17:19.332024 update-ssh-keys[2253]: Updated "/home/core/.ssh/authorized_keys"
Apr 21 10:17:19.333519 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 21 10:17:19.345701 systemd[1]: Finished sshkeys.service.
Apr 21 10:17:19.366894 amazon-ssm-agent[2157]: 2026-04-21 10:17:19 INFO https_proxy:
Apr 21 10:17:19.454269 dbus-daemon[2053]: [system] Successfully activated service 'org.freedesktop.hostname1'
Apr 21 10:17:19.457620 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Apr 21 10:17:19.458191 dbus-daemon[2053]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=2141 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Apr 21 10:17:19.470760 amazon-ssm-agent[2157]: 2026-04-21 10:17:19 INFO http_proxy:
Apr 21 10:17:19.470761 systemd[1]: Starting polkit.service - Authorization Manager...
Apr 21 10:17:19.481288 locksmithd[2143]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 21 10:17:19.545119 polkitd[2274]: Started polkitd version 121
Apr 21 10:17:19.575558 polkitd[2274]: Loading rules from directory /etc/polkit-1/rules.d
Apr 21 10:17:19.576836 amazon-ssm-agent[2157]: 2026-04-21 10:17:19 INFO no_proxy:
Apr 21 10:17:19.584064 polkitd[2274]: Loading rules from directory /usr/share/polkit-1/rules.d
Apr 21 10:17:19.587879 polkitd[2274]: Finished loading, compiling and executing 2 rules
Apr 21 10:17:19.588600 dbus-daemon[2053]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Apr 21 10:17:19.588851 systemd[1]: Started polkit.service - Authorization Manager.
Apr 21 10:17:19.590991 polkitd[2274]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Apr 21 10:17:19.661485 systemd-hostnamed[2141]: Hostname set to (transient)
Apr 21 10:17:19.661613 systemd-resolved[1985]: System hostname changed to 'ip-172-31-28-88'.
Apr 21 10:17:19.681183 amazon-ssm-agent[2157]: 2026-04-21 10:17:19 INFO Checking if agent identity type OnPrem can be assumed
Apr 21 10:17:19.760000 containerd[2102]: time="2026-04-21T10:17:19.759874385Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 21 10:17:19.781078 amazon-ssm-agent[2157]: 2026-04-21 10:17:19 INFO Checking if agent identity type EC2 can be assumed
Apr 21 10:17:19.885070 amazon-ssm-agent[2157]: 2026-04-21 10:17:19 INFO Agent will take identity from EC2
Apr 21 10:17:19.913492 containerd[2102]: time="2026-04-21T10:17:19.913211690Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:17:19.919588 containerd[2102]: time="2026-04-21T10:17:19.919119529Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:17:19.919588 containerd[2102]: time="2026-04-21T10:17:19.919182119Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 21 10:17:19.919588 containerd[2102]: time="2026-04-21T10:17:19.919219442Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 21 10:17:19.919588 containerd[2102]: time="2026-04-21T10:17:19.919406503Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 21 10:17:19.919588 containerd[2102]: time="2026-04-21T10:17:19.919426209Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 21 10:17:19.919588 containerd[2102]: time="2026-04-21T10:17:19.919493932Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:17:19.919588 containerd[2102]: time="2026-04-21T10:17:19.919510804Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:17:19.921284 containerd[2102]: time="2026-04-21T10:17:19.920789394Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:17:19.921284 containerd[2102]: time="2026-04-21T10:17:19.920822378Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 21 10:17:19.921284 containerd[2102]: time="2026-04-21T10:17:19.920847335Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:17:19.921284 containerd[2102]: time="2026-04-21T10:17:19.920863707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 21 10:17:19.921284 containerd[2102]: time="2026-04-21T10:17:19.920980237Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:17:19.921284 containerd[2102]: time="2026-04-21T10:17:19.921244673Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:17:19.925851 containerd[2102]: time="2026-04-21T10:17:19.924146841Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:17:19.925851 containerd[2102]: time="2026-04-21T10:17:19.925204275Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 21 10:17:19.925851 containerd[2102]: time="2026-04-21T10:17:19.925371198Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 21 10:17:19.925851 containerd[2102]: time="2026-04-21T10:17:19.925437032Z" level=info msg="metadata content store policy set" policy=shared
Apr 21 10:17:19.936754 containerd[2102]: time="2026-04-21T10:17:19.936326842Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 21 10:17:19.936754 containerd[2102]: time="2026-04-21T10:17:19.936416978Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 21 10:17:19.936754 containerd[2102]: time="2026-04-21T10:17:19.936440645Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 21 10:17:19.936754 containerd[2102]: time="2026-04-21T10:17:19.936526429Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 21 10:17:19.936754 containerd[2102]: time="2026-04-21T10:17:19.936549939Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 21 10:17:19.937306 containerd[2102]: time="2026-04-21T10:17:19.937281192Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 21 10:17:19.939223 containerd[2102]: time="2026-04-21T10:17:19.937946118Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 21 10:17:19.939223 containerd[2102]: time="2026-04-21T10:17:19.938141426Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 21 10:17:19.939223 containerd[2102]: time="2026-04-21T10:17:19.938165319Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 21 10:17:19.939223 containerd[2102]: time="2026-04-21T10:17:19.938189685Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 21 10:17:19.939223 containerd[2102]: time="2026-04-21T10:17:19.938231653Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 21 10:17:19.939223 containerd[2102]: time="2026-04-21T10:17:19.938252457Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 21 10:17:19.939223 containerd[2102]: time="2026-04-21T10:17:19.938273492Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 21 10:17:19.939223 containerd[2102]: time="2026-04-21T10:17:19.938295475Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 21 10:17:19.939223 containerd[2102]: time="2026-04-21T10:17:19.938323389Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 21 10:17:19.939223 containerd[2102]: time="2026-04-21T10:17:19.938343530Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 21 10:17:19.939223 containerd[2102]: time="2026-04-21T10:17:19.938363032Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 21 10:17:19.939223 containerd[2102]: time="2026-04-21T10:17:19.938382312Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 21 10:17:19.939223 containerd[2102]: time="2026-04-21T10:17:19.938409873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 21 10:17:19.939223 containerd[2102]: time="2026-04-21T10:17:19.938427991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 21 10:17:19.939795 containerd[2102]: time="2026-04-21T10:17:19.938444125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 21 10:17:19.939795 containerd[2102]: time="2026-04-21T10:17:19.938462961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 21 10:17:19.939795 containerd[2102]: time="2026-04-21T10:17:19.938481763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 21 10:17:19.939795 containerd[2102]: time="2026-04-21T10:17:19.938506029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 21 10:17:19.939795 containerd[2102]: time="2026-04-21T10:17:19.938523816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 21 10:17:19.939795 containerd[2102]: time="2026-04-21T10:17:19.938543116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 21 10:17:19.939795 containerd[2102]: time="2026-04-21T10:17:19.938563257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 21 10:17:19.939795 containerd[2102]: time="2026-04-21T10:17:19.938585026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 21 10:17:19.939795 containerd[2102]: time="2026-04-21T10:17:19.938603480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 21 10:17:19.939795 containerd[2102]: time="2026-04-21T10:17:19.938622824Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 21 10:17:19.939795 containerd[2102]: time="2026-04-21T10:17:19.938641241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 21 10:17:19.939795 containerd[2102]: time="2026-04-21T10:17:19.938665385Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 21 10:17:19.939795 containerd[2102]: time="2026-04-21T10:17:19.938697486Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 21 10:17:19.939795 containerd[2102]: time="2026-04-21T10:17:19.938716732Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 21 10:17:19.946480 containerd[2102]: time="2026-04-21T10:17:19.943668100Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 21 10:17:19.946480 containerd[2102]: time="2026-04-21T10:17:19.943778833Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 21 10:17:19.946480 containerd[2102]: time="2026-04-21T10:17:19.943808391Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 21 10:17:19.946480 containerd[2102]: time="2026-04-21T10:17:19.943827066Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 21 10:17:19.946480 containerd[2102]: time="2026-04-21T10:17:19.943846350Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 21 10:17:19.946480 containerd[2102]: time="2026-04-21T10:17:19.943862428Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 21 10:17:19.946480 containerd[2102]: time="2026-04-21T10:17:19.943881781Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 21 10:17:19.946480 containerd[2102]: time="2026-04-21T10:17:19.943897624Z" level=info msg="NRI interface is disabled by configuration."
Apr 21 10:17:19.946480 containerd[2102]: time="2026-04-21T10:17:19.943912758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 21 10:17:19.946922 containerd[2102]: time="2026-04-21T10:17:19.944358842Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 21 10:17:19.946922 containerd[2102]: time="2026-04-21T10:17:19.944450146Z" level=info msg="Connect containerd service"
Apr 21 10:17:19.946922 containerd[2102]: time="2026-04-21T10:17:19.944510452Z" level=info msg="using legacy CRI server"
Apr 21 10:17:19.946922 containerd[2102]: time="2026-04-21T10:17:19.944521727Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 21 10:17:19.946922 containerd[2102]: time="2026-04-21T10:17:19.944653508Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 21 10:17:19.952918 containerd[2102]: time="2026-04-21T10:17:19.951223819Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 21 10:17:19.954138 containerd[2102]: time="2026-04-21T10:17:19.953473606Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 21 10:17:19.954138 containerd[2102]: time="2026-04-21T10:17:19.954095902Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 21 10:17:19.954381 containerd[2102]: time="2026-04-21T10:17:19.954316556Z" level=info msg="Start subscribing containerd event"
Apr 21 10:17:19.955110 containerd[2102]: time="2026-04-21T10:17:19.955082218Z" level=info msg="Start recovering state"
Apr 21 10:17:19.955376 containerd[2102]: time="2026-04-21T10:17:19.955293156Z" level=info msg="Start event monitor"
Apr 21 10:17:19.959769 containerd[2102]: time="2026-04-21T10:17:19.957793964Z" level=info msg="Start snapshots syncer"
Apr 21 10:17:19.963499 containerd[2102]: time="2026-04-21T10:17:19.963455649Z" level=info msg="Start cni network conf syncer for default"
Apr 21 10:17:19.963880 containerd[2102]: time="2026-04-21T10:17:19.963856896Z" level=info msg="Start streaming server"
Apr 21 10:17:19.964209 systemd[1]: Started containerd.service - containerd container runtime.
Apr 21 10:17:19.968677 containerd[2102]: time="2026-04-21T10:17:19.966261441Z" level=info msg="containerd successfully booted in 0.212778s"
Apr 21 10:17:19.981827 amazon-ssm-agent[2157]: 2026-04-21 10:17:19 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 21 10:17:20.082138 amazon-ssm-agent[2157]: 2026-04-21 10:17:19 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 21 10:17:20.182308 amazon-ssm-agent[2157]: 2026-04-21 10:17:19 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 21 10:17:20.281440 amazon-ssm-agent[2157]: 2026-04-21 10:17:19 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Apr 21 10:17:20.286039 sshd_keygen[2106]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 21 10:17:20.351394 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 21 10:17:20.366172 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 21 10:17:20.378658 systemd[1]: issuegen.service: Deactivated successfully.
Apr 21 10:17:20.384142 amazon-ssm-agent[2157]: 2026-04-21 10:17:19 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Apr 21 10:17:20.379034 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 21 10:17:20.395516 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 21 10:17:20.424308 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 21 10:17:20.438462 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 21 10:17:20.452658 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 21 10:17:20.454997 systemd[1]: Reached target getty.target - Login Prompts.
Apr 21 10:17:20.481679 amazon-ssm-agent[2157]: 2026-04-21 10:17:19 INFO [amazon-ssm-agent] Starting Core Agent
Apr 21 10:17:20.506526 amazon-ssm-agent[2157]: 2026-04-21 10:17:19 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Apr 21 10:17:20.506819 amazon-ssm-agent[2157]: 2026-04-21 10:17:19 INFO [Registrar] Starting registrar module
Apr 21 10:17:20.506819 amazon-ssm-agent[2157]: 2026-04-21 10:17:19 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Apr 21 10:17:20.506819 amazon-ssm-agent[2157]: 2026-04-21 10:17:20 INFO [EC2Identity] EC2 registration was successful.
Apr 21 10:17:20.506819 amazon-ssm-agent[2157]: 2026-04-21 10:17:20 INFO [CredentialRefresher] credentialRefresher has started
Apr 21 10:17:20.506819 amazon-ssm-agent[2157]: 2026-04-21 10:17:20 INFO [CredentialRefresher] Starting credentials refresher loop
Apr 21 10:17:20.506819 amazon-ssm-agent[2157]: 2026-04-21 10:17:20 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Apr 21 10:17:20.580845 amazon-ssm-agent[2157]: 2026-04-21 10:17:20 INFO [CredentialRefresher] Next credential rotation will be in 31.399988852333333 minutes
Apr 21 10:17:20.613289 tar[2091]: linux-amd64/README.md
Apr 21 10:17:20.630203 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 21 10:17:21.007064 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:17:21.007812 (kubelet)[2333]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 10:17:21.009045 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 21 10:17:21.009990 systemd[1]: Startup finished in 7.758s (kernel) + 7.196s (userspace) = 14.955s.
Apr 21 10:17:21.524302 amazon-ssm-agent[2157]: 2026-04-21 10:17:21 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Apr 21 10:17:21.626137 amazon-ssm-agent[2157]: 2026-04-21 10:17:21 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2344) started
Apr 21 10:17:21.724844 amazon-ssm-agent[2157]: 2026-04-21 10:17:21 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Apr 21 10:17:21.783697 kubelet[2333]: E0421 10:17:21.783535 2333 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 10:17:21.786260 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 10:17:21.786615 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 10:17:22.120308 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 21 10:17:22.132753 systemd[1]: Started sshd@0-172.31.28.88:22-50.85.169.122:52100.service - OpenSSH per-connection server daemon (50.85.169.122:52100).
Apr 21 10:17:23.160466 sshd[2358]: Accepted publickey for core from 50.85.169.122 port 52100 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:17:23.162928 sshd[2358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:17:23.174836 systemd-logind[2076]: New session 1 of user core.
Apr 21 10:17:23.175568 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 21 10:17:23.181200 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 21 10:17:23.197861 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 21 10:17:23.208552 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 21 10:17:23.217751 (systemd)[2365]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 21 10:17:23.337068 systemd[2365]: Queued start job for default target default.target.
Apr 21 10:17:23.337590 systemd[2365]: Created slice app.slice - User Application Slice.
Apr 21 10:17:23.337629 systemd[2365]: Reached target paths.target - Paths.
Apr 21 10:17:23.337649 systemd[2365]: Reached target timers.target - Timers.
Apr 21 10:17:23.342911 systemd[2365]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 21 10:17:23.352714 systemd[2365]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 21 10:17:23.352831 systemd[2365]: Reached target sockets.target - Sockets.
Apr 21 10:17:23.352853 systemd[2365]: Reached target basic.target - Basic System.
Apr 21 10:17:23.352912 systemd[2365]: Reached target default.target - Main User Target.
Apr 21 10:17:23.352952 systemd[2365]: Startup finished in 127ms.
Apr 21 10:17:23.353585 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 21 10:17:23.360331 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 21 10:17:24.083344 systemd[1]: Started sshd@1-172.31.28.88:22-50.85.169.122:52104.service - OpenSSH per-connection server daemon (50.85.169.122:52104).
Apr 21 10:17:25.105490 sshd[2377]: Accepted publickey for core from 50.85.169.122 port 52104 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:17:25.107334 sshd[2377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:17:25.113023 systemd-logind[2076]: New session 2 of user core.
Apr 21 10:17:25.124406 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 21 10:17:26.586411 systemd-resolved[1985]: Clock change detected. Flushing caches.
Apr 21 10:17:26.897810 sshd[2377]: pam_unix(sshd:session): session closed for user core
Apr 21 10:17:26.902240 systemd-logind[2076]: Session 2 logged out. Waiting for processes to exit.
Apr 21 10:17:26.903737 systemd[1]: sshd@1-172.31.28.88:22-50.85.169.122:52104.service: Deactivated successfully.
Apr 21 10:17:26.908255 systemd[1]: session-2.scope: Deactivated successfully.
Apr 21 10:17:26.909351 systemd-logind[2076]: Removed session 2.
Apr 21 10:17:27.070418 systemd[1]: Started sshd@2-172.31.28.88:22-50.85.169.122:52112.service - OpenSSH per-connection server daemon (50.85.169.122:52112).
Apr 21 10:17:28.093930 sshd[2385]: Accepted publickey for core from 50.85.169.122 port 52112 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:17:28.098607 sshd[2385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:17:28.109222 systemd-logind[2076]: New session 3 of user core.
Apr 21 10:17:28.118523 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 21 10:17:28.799307 sshd[2385]: pam_unix(sshd:session): session closed for user core
Apr 21 10:17:28.804149 systemd[1]: sshd@2-172.31.28.88:22-50.85.169.122:52112.service: Deactivated successfully.
Apr 21 10:17:28.808736 systemd[1]: session-3.scope: Deactivated successfully.
Apr 21 10:17:28.809602 systemd-logind[2076]: Session 3 logged out. Waiting for processes to exit.
Apr 21 10:17:28.810626 systemd-logind[2076]: Removed session 3.
Apr 21 10:17:28.973757 systemd[1]: Started sshd@3-172.31.28.88:22-50.85.169.122:52120.service - OpenSSH per-connection server daemon (50.85.169.122:52120).
Apr 21 10:17:29.978873 sshd[2393]: Accepted publickey for core from 50.85.169.122 port 52120 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:17:29.980410 sshd[2393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:17:29.986192 systemd-logind[2076]: New session 4 of user core.
Apr 21 10:17:29.992534 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 21 10:17:30.680528 sshd[2393]: pam_unix(sshd:session): session closed for user core
Apr 21 10:17:30.684849 systemd[1]: sshd@3-172.31.28.88:22-50.85.169.122:52120.service: Deactivated successfully.
Apr 21 10:17:30.689715 systemd-logind[2076]: Session 4 logged out. Waiting for processes to exit.
Apr 21 10:17:30.690362 systemd[1]: session-4.scope: Deactivated successfully.
Apr 21 10:17:30.692589 systemd-logind[2076]: Removed session 4.
Apr 21 10:17:30.840463 systemd[1]: Started sshd@4-172.31.28.88:22-50.85.169.122:39752.service - OpenSSH per-connection server daemon (50.85.169.122:39752).
Apr 21 10:17:31.816791 sshd[2401]: Accepted publickey for core from 50.85.169.122 port 39752 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:17:31.818442 sshd[2401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:17:31.834129 systemd-logind[2076]: New session 5 of user core.
Apr 21 10:17:31.840569 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 21 10:17:32.352031 sudo[2405]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 21 10:17:32.352468 sudo[2405]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 10:17:32.364776 sudo[2405]: pam_unix(sudo:session): session closed for user root
Apr 21 10:17:32.524355 sshd[2401]: pam_unix(sshd:session): session closed for user core
Apr 21 10:17:32.528409 systemd[1]: sshd@4-172.31.28.88:22-50.85.169.122:39752.service: Deactivated successfully.
Apr 21 10:17:32.533678 systemd[1]: session-5.scope: Deactivated successfully.
Apr 21 10:17:32.535314 systemd-logind[2076]: Session 5 logged out. Waiting for processes to exit.
Apr 21 10:17:32.536461 systemd-logind[2076]: Removed session 5.
Apr 21 10:17:32.694422 systemd[1]: Started sshd@5-172.31.28.88:22-50.85.169.122:39756.service - OpenSSH per-connection server daemon (50.85.169.122:39756).
Apr 21 10:17:33.115185 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 21 10:17:33.122710 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:17:33.372283 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:17:33.378877 (kubelet)[2424]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 10:17:33.427845 kubelet[2424]: E0421 10:17:33.427763 2424 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 10:17:33.432013 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 10:17:33.433105 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 10:17:33.685832 sshd[2410]: Accepted publickey for core from 50.85.169.122 port 39756 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:17:33.687510 sshd[2410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:17:33.693171 systemd-logind[2076]: New session 6 of user core.
Apr 21 10:17:33.699503 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 21 10:17:34.212463 sudo[2436]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 21 10:17:34.212868 sudo[2436]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 10:17:34.216934 sudo[2436]: pam_unix(sudo:session): session closed for user root
Apr 21 10:17:34.222825 sudo[2435]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 21 10:17:34.223249 sudo[2435]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 10:17:34.244457 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 21 10:17:34.247079 auditctl[2439]: No rules
Apr 21 10:17:34.247453 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 21 10:17:34.247720 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 21 10:17:34.258571 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 21 10:17:34.285317 augenrules[2458]: No rules
Apr 21 10:17:34.287234 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 21 10:17:34.289977 sudo[2435]: pam_unix(sudo:session): session closed for user root
Apr 21 10:17:34.452028 sshd[2410]: pam_unix(sshd:session): session closed for user core
Apr 21 10:17:34.456990 systemd-logind[2076]: Session 6 logged out. Waiting for processes to exit.
Apr 21 10:17:34.459488 systemd[1]: sshd@5-172.31.28.88:22-50.85.169.122:39756.service: Deactivated successfully.
Apr 21 10:17:34.463131 systemd[1]: session-6.scope: Deactivated successfully.
Apr 21 10:17:34.464822 systemd-logind[2076]: Removed session 6.
Apr 21 10:17:34.629517 systemd[1]: Started sshd@6-172.31.28.88:22-50.85.169.122:39764.service - OpenSSH per-connection server daemon (50.85.169.122:39764).
Apr 21 10:17:35.648321 sshd[2467]: Accepted publickey for core from 50.85.169.122 port 39764 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:17:35.648980 sshd[2467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:17:35.654787 systemd-logind[2076]: New session 7 of user core.
Apr 21 10:17:35.665632 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 21 10:17:36.190558 sudo[2471]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 21 10:17:36.190959 sudo[2471]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 10:17:36.584691 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 21 10:17:36.584725 (dockerd)[2486]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 21 10:17:36.959987 dockerd[2486]: time="2026-04-21T10:17:36.959784623Z" level=info msg="Starting up"
Apr 21 10:17:37.196701 dockerd[2486]: time="2026-04-21T10:17:37.196649372Z" level=info msg="Loading containers: start."
Apr 21 10:17:37.319278 kernel: Initializing XFRM netlink socket
Apr 21 10:17:37.349860 (udev-worker)[2509]: Network interface NamePolicy= disabled on kernel command line.
Apr 21 10:17:37.413728 systemd-networkd[1655]: docker0: Link UP
Apr 21 10:17:37.433775 dockerd[2486]: time="2026-04-21T10:17:37.433728921Z" level=info msg="Loading containers: done."
Apr 21 10:17:37.457998 dockerd[2486]: time="2026-04-21T10:17:37.457942325Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 21 10:17:37.458277 dockerd[2486]: time="2026-04-21T10:17:37.458109779Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 21 10:17:37.458277 dockerd[2486]: time="2026-04-21T10:17:37.458263351Z" level=info msg="Daemon has completed initialization"
Apr 21 10:17:37.508230 dockerd[2486]: time="2026-04-21T10:17:37.507780864Z" level=info msg="API listen on /run/docker.sock"
Apr 21 10:17:37.508159 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 21 10:17:38.184627 containerd[2102]: time="2026-04-21T10:17:38.184582647Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\""
Apr 21 10:17:38.879590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount70337016.mount: Deactivated successfully.
Apr 21 10:17:40.658809 containerd[2102]: time="2026-04-21T10:17:40.658750515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:40.661753 containerd[2102]: time="2026-04-21T10:17:40.661711042Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30193989"
Apr 21 10:17:40.665500 containerd[2102]: time="2026-04-21T10:17:40.665310991Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:40.669084 containerd[2102]: time="2026-04-21T10:17:40.668838499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:40.670414 containerd[2102]: time="2026-04-21T10:17:40.670124464Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 2.485492227s"
Apr 21 10:17:40.670414 containerd[2102]: time="2026-04-21T10:17:40.670172817Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\""
Apr 21 10:17:40.670757 containerd[2102]: time="2026-04-21T10:17:40.670731719Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\""
Apr 21 10:17:42.632803 containerd[2102]: time="2026-04-21T10:17:42.632749936Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:42.635247 containerd[2102]: time="2026-04-21T10:17:42.635142981Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171447"
Apr 21 10:17:42.638086 containerd[2102]: time="2026-04-21T10:17:42.637756390Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:42.643811 containerd[2102]: time="2026-04-21T10:17:42.642415322Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:42.643811 containerd[2102]: time="2026-04-21T10:17:42.643648687Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 1.972876306s"
Apr 21 10:17:42.643811 containerd[2102]: time="2026-04-21T10:17:42.643695269Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\""
Apr 21 10:17:42.644781 containerd[2102]: time="2026-04-21T10:17:42.644745644Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\""
Apr 21 10:17:43.682777 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 21 10:17:43.692198 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:17:43.955330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:17:43.958525 (kubelet)[2702]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 10:17:44.027663 kubelet[2702]: E0421 10:17:44.027617 2702 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 10:17:44.030836 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 10:17:44.031094 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 10:17:44.348531 containerd[2102]: time="2026-04-21T10:17:44.347944574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:44.350215 containerd[2102]: time="2026-04-21T10:17:44.350148860Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289756"
Apr 21 10:17:44.352486 containerd[2102]: time="2026-04-21T10:17:44.352423762Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:44.356888 containerd[2102]: time="2026-04-21T10:17:44.356808296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:44.358291 containerd[2102]: time="2026-04-21T10:17:44.358241665Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 1.713458818s"
Apr 21 10:17:44.358769 containerd[2102]: time="2026-04-21T10:17:44.358295572Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\""
Apr 21 10:17:44.359566 containerd[2102]: time="2026-04-21T10:17:44.358892688Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\""
Apr 21 10:17:45.580439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2984475441.mount: Deactivated successfully.
Apr 21 10:17:46.207416 containerd[2102]: time="2026-04-21T10:17:46.207341949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:46.209606 containerd[2102]: time="2026-04-21T10:17:46.209522350Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010711"
Apr 21 10:17:46.212749 containerd[2102]: time="2026-04-21T10:17:46.212200286Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:46.215991 containerd[2102]: time="2026-04-21T10:17:46.215916768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:46.217031 containerd[2102]: time="2026-04-21T10:17:46.216842100Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 1.857912315s"
Apr 21 10:17:46.217031 containerd[2102]: time="2026-04-21T10:17:46.216887270Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\""
Apr 21 10:17:46.217918 containerd[2102]: time="2026-04-21T10:17:46.217719609Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Apr 21 10:17:46.921730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1342560820.mount: Deactivated successfully.
Apr 21 10:17:48.282890 containerd[2102]: time="2026-04-21T10:17:48.282833850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:48.285082 containerd[2102]: time="2026-04-21T10:17:48.284828518Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Apr 21 10:17:48.287485 containerd[2102]: time="2026-04-21T10:17:48.287415015Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:48.292557 containerd[2102]: time="2026-04-21T10:17:48.292031351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:48.293553 containerd[2102]: time="2026-04-21T10:17:48.293505753Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.075747164s"
Apr 21 10:17:48.293651 containerd[2102]: time="2026-04-21T10:17:48.293564591Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Apr 21 10:17:48.294482 containerd[2102]: time="2026-04-21T10:17:48.294455258Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 21 10:17:48.879751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount423696675.mount: Deactivated successfully.
Apr 21 10:17:48.891518 containerd[2102]: time="2026-04-21T10:17:48.891459093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:48.893550 containerd[2102]: time="2026-04-21T10:17:48.893478405Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Apr 21 10:17:48.895938 containerd[2102]: time="2026-04-21T10:17:48.895877740Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:48.899460 containerd[2102]: time="2026-04-21T10:17:48.899399008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:48.900795 containerd[2102]: time="2026-04-21T10:17:48.900177206Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 605.681475ms"
Apr 21 10:17:48.900795 containerd[2102]: time="2026-04-21T10:17:48.900216817Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Apr 21 10:17:48.900962 containerd[2102]: time="2026-04-21T10:17:48.900861733Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Apr 21 10:17:49.496301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1803092249.mount: Deactivated successfully.
Apr 21 10:17:50.755930 containerd[2102]: time="2026-04-21T10:17:50.755869981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:50.757851 containerd[2102]: time="2026-04-21T10:17:50.757782564Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23719426"
Apr 21 10:17:50.760225 containerd[2102]: time="2026-04-21T10:17:50.760153508Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:50.764889 containerd[2102]: time="2026-04-21T10:17:50.764610085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:50.766216 containerd[2102]: time="2026-04-21T10:17:50.766020597Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.865127439s"
Apr 21 10:17:50.766216 containerd[2102]: time="2026-04-21T10:17:50.766097302Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Apr 21 10:17:50.772897 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Apr 21 10:17:54.171734 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 21 10:17:54.178681 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:17:54.199505 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 21 10:17:54.199830 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 21 10:17:54.200297 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:17:54.212736 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:17:54.249507 systemd[1]: Reloading requested from client PID 2877 ('systemctl') (unit session-7.scope)...
Apr 21 10:17:54.249528 systemd[1]: Reloading...
Apr 21 10:17:54.370084 zram_generator::config[2913]: No configuration found.
Apr 21 10:17:54.543309 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:17:54.628703 systemd[1]: Reloading finished in 378 ms.
Apr 21 10:17:54.671784 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 21 10:17:54.671892 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 21 10:17:54.672295 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:17:54.676437 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:17:54.935301 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:17:54.949773 (kubelet)[2989]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 21 10:17:55.000088 kubelet[2989]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 21 10:17:55.000088 kubelet[2989]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 21 10:17:55.000088 kubelet[2989]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 21 10:17:55.000088 kubelet[2989]: I0421 10:17:54.999598 2989 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 21 10:17:55.770821 kubelet[2989]: I0421 10:17:55.770770 2989 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 21 10:17:55.770821 kubelet[2989]: I0421 10:17:55.770812 2989 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 21 10:17:55.771196 kubelet[2989]: I0421 10:17:55.771171 2989 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 21 10:17:55.819104 kubelet[2989]: E0421 10:17:55.819008 2989 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.28.88:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.28.88:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 21 10:17:55.824651 kubelet[2989]: I0421 10:17:55.824415 2989 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 21 10:17:55.828516 kubelet[2989]: E0421 10:17:55.828451 2989 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 21 10:17:55.828516 kubelet[2989]: I0421 10:17:55.828492 2989 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 21 10:17:55.836556 kubelet[2989]: I0421 10:17:55.836528 2989 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 21 10:17:55.842099 kubelet[2989]: I0421 10:17:55.842023 2989 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 21 10:17:55.845569 kubelet[2989]: I0421 10:17:55.842101 2989 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-88","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Apr 21 10:17:55.845569 kubelet[2989]: I0421 10:17:55.845567 2989 topology_manager.go:138] "Creating topology manager with none policy"
Apr 21 10:17:55.845804 kubelet[2989]: I0421 10:17:55.845586 2989 container_manager_linux.go:303] "Creating device plugin manager"
Apr 21 10:17:55.845804 kubelet[2989]: I0421 10:17:55.845762 2989 state_mem.go:36] "Initialized new in-memory state store"
Apr 21 10:17:55.852621 kubelet[2989]: I0421 10:17:55.852583 2989 kubelet.go:480] "Attempting to sync node with API server"
Apr 21 10:17:55.852776 kubelet[2989]: I0421 10:17:55.852631 2989 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 21 10:17:55.852776 kubelet[2989]: I0421 10:17:55.852672 2989 kubelet.go:386] "Adding apiserver pod source"
Apr 21 10:17:55.852776 kubelet[2989]: I0421 10:17:55.852717 2989 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 21 10:17:55.865772 kubelet[2989]: E0421 10:17:55.865725 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.28.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-88&limit=500&resourceVersion=0\": dial tcp 172.31.28.88:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 21 10:17:55.865919 kubelet[2989]: E0421 10:17:55.865857 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.28.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.88:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 21 10:17:55.865994 kubelet[2989]: I0421 10:17:55.865963 2989 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 21 10:17:55.866546 kubelet[2989]: I0421 10:17:55.866514 2989 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 21 10:17:55.867662 kubelet[2989]: W0421 10:17:55.867633 2989 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 21 10:17:55.876345 kubelet[2989]: I0421 10:17:55.876311 2989 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 21 10:17:55.876474 kubelet[2989]: I0421 10:17:55.876392 2989 server.go:1289] "Started kubelet"
Apr 21 10:17:55.876665 kubelet[2989]: I0421 10:17:55.876615 2989 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 21 10:17:55.879924 kubelet[2989]: I0421 10:17:55.879339 2989 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 21 10:17:55.879924 kubelet[2989]: I0421 10:17:55.879754 2989 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 21 10:17:55.881730 kubelet[2989]: I0421 10:17:55.881699 2989 server.go:317] "Adding debug handlers to kubelet server"
Apr 21 10:17:55.884082 kubelet[2989]: I0421 10:17:55.883752 2989 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 21 10:17:55.888075 kubelet[2989]: E0421 10:17:55.885347 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.28.88:6443/api/v1/namespaces/default/events\": dial tcp 172.31.28.88:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-28-88.18a857e0c45c4e30 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-88,UID:ip-172-31-28-88,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-88,},FirstTimestamp:2026-04-21 10:17:55.876335152 +0000 UTC m=+0.921648484,LastTimestamp:2026-04-21 10:17:55.876335152 +0000 UTC m=+0.921648484,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-88,}"
Apr 21 10:17:55.888075 kubelet[2989]: I0421 10:17:55.887521 2989 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 21 10:17:55.890611 kubelet[2989]: E0421 10:17:55.890589 2989 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-88\" not found"
Apr 21 10:17:55.890778 kubelet[2989]: I0421 10:17:55.890766 2989 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 21 10:17:55.891133 kubelet[2989]: I0421 10:17:55.891116 2989 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 21 10:17:55.891448 kubelet[2989]: I0421 10:17:55.891435 2989 reconciler.go:26] "Reconciler: start to sync state"
Apr 21 10:17:55.892048 kubelet[2989]: E0421 10:17:55.892022 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.28.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.88:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 21 10:17:55.894391 kubelet[2989]: I0421 10:17:55.894371 2989 factory.go:223] Registration of the systemd container factory successfully
Apr 21 10:17:55.894733 kubelet[2989]: I0421 10:17:55.894713 2989 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 21 10:17:55.895364 kubelet[2989]: E0421 10:17:55.895347 2989 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 21 10:17:55.896236 kubelet[2989]: E0421 10:17:55.896198 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-88?timeout=10s\": dial tcp 172.31.28.88:6443: connect: connection refused" interval="200ms"
Apr 21 10:17:55.896564 kubelet[2989]: I0421 10:17:55.896548 2989 factory.go:223] Registration of the containerd container factory successfully
Apr 21 10:17:55.925403 kubelet[2989]: I0421 10:17:55.924619 2989 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 21 10:17:55.928084 kubelet[2989]: I0421 10:17:55.926696 2989 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 21 10:17:55.928084 kubelet[2989]: I0421 10:17:55.926732 2989 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 21 10:17:55.928084 kubelet[2989]: I0421 10:17:55.926759 2989 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 21 10:17:55.928084 kubelet[2989]: I0421 10:17:55.926771 2989 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 21 10:17:55.928084 kubelet[2989]: E0421 10:17:55.926819 2989 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 21 10:17:55.928851 kubelet[2989]: E0421 10:17:55.928819 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.28.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.88:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 21 10:17:55.935561 kubelet[2989]: I0421 10:17:55.935536 2989 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 21 10:17:55.935725 kubelet[2989]: I0421 10:17:55.935705 2989 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 21 10:17:55.935801 kubelet[2989]: I0421 10:17:55.935733 2989 state_mem.go:36] "Initialized new in-memory state store"
Apr 21 10:17:55.940199 kubelet[2989]: I0421 10:17:55.940164 2989 policy_none.go:49] "None policy: Start"
Apr 21 10:17:55.940199 kubelet[2989]: I0421 10:17:55.940198 2989 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 21 10:17:55.940365 kubelet[2989]: I0421 10:17:55.940212 2989 state_mem.go:35] "Initializing new in-memory state store"
Apr 21 10:17:55.946982 kubelet[2989]: E0421 10:17:55.946944 2989 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 21 10:17:55.947190 kubelet[2989]: I0421 10:17:55.947169 2989 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 21 10:17:55.947252 kubelet[2989]: I0421 10:17:55.947190 2989 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 21 10:17:55.948508 kubelet[2989]: I0421 10:17:55.948478 2989 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 21 10:17:55.955097 kubelet[2989]: E0421 10:17:55.953692 2989 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 21 10:17:55.955097 kubelet[2989]: E0421 10:17:55.953752 2989 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-28-88\" not found"
Apr 21 10:17:56.036840 kubelet[2989]: E0421 10:17:56.036730 2989 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-88\" not found" node="ip-172-31-28-88"
Apr 21 10:17:56.044083 kubelet[2989]: E0421 10:17:56.044036 2989 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-88\" not found" node="ip-172-31-28-88"
Apr 21 10:17:56.048802 kubelet[2989]: I0421 10:17:56.048723 2989 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-88"
Apr 21 10:17:56.051076 kubelet[2989]: E0421 10:17:56.051027 2989 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-88\" not found" node="ip-172-31-28-88"
Apr 21 10:17:56.052082 kubelet[2989]: E0421 10:17:56.051510 2989 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.88:6443/api/v1/nodes\": dial tcp 172.31.28.88:6443: connect: connection refused" node="ip-172-31-28-88"
Apr 21 10:17:56.098072 kubelet[2989]: E0421 10:17:56.098013 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-88?timeout=10s\": dial tcp 172.31.28.88:6443: connect: connection refused" interval="400ms"
Apr 21 10:17:56.192545 kubelet[2989]: I0421 10:17:56.192493 2989 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b88bb2a4b7188fc0c9ce1e09a7cacdf7-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-88\" (UID: \"b88bb2a4b7188fc0c9ce1e09a7cacdf7\") " pod="kube-system/kube-apiserver-ip-172-31-28-88"
Apr 21 10:17:56.192545 kubelet[2989]: I0421 10:17:56.192543 2989 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3b995ee02ca39745137df1402a209ad2-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-88\" (UID: \"3b995ee02ca39745137df1402a209ad2\") " pod="kube-system/kube-controller-manager-ip-172-31-28-88"
Apr 21 10:17:56.192760 kubelet[2989]: I0421 10:17:56.192567 2989 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3b995ee02ca39745137df1402a209ad2-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-88\" (UID: \"3b995ee02ca39745137df1402a209ad2\") " pod="kube-system/kube-controller-manager-ip-172-31-28-88"
Apr 21 10:17:56.192760 kubelet[2989]: I0421 10:17:56.192590 2989 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3b995ee02ca39745137df1402a209ad2-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-88\" (UID: \"3b995ee02ca39745137df1402a209ad2\") " pod="kube-system/kube-controller-manager-ip-172-31-28-88"
Apr 21 10:17:56.192760 kubelet[2989]: I0421 10:17:56.192612 2989 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/010ae3a85ac5c4cf18921e5f37f70190-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-88\" (UID: \"010ae3a85ac5c4cf18921e5f37f70190\") " 
pod="kube-system/kube-scheduler-ip-172-31-28-88" Apr 21 10:17:56.192760 kubelet[2989]: I0421 10:17:56.192632 2989 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b88bb2a4b7188fc0c9ce1e09a7cacdf7-ca-certs\") pod \"kube-apiserver-ip-172-31-28-88\" (UID: \"b88bb2a4b7188fc0c9ce1e09a7cacdf7\") " pod="kube-system/kube-apiserver-ip-172-31-28-88" Apr 21 10:17:56.192760 kubelet[2989]: I0421 10:17:56.192650 2989 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b88bb2a4b7188fc0c9ce1e09a7cacdf7-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-88\" (UID: \"b88bb2a4b7188fc0c9ce1e09a7cacdf7\") " pod="kube-system/kube-apiserver-ip-172-31-28-88" Apr 21 10:17:56.192937 kubelet[2989]: I0421 10:17:56.192670 2989 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3b995ee02ca39745137df1402a209ad2-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-88\" (UID: \"3b995ee02ca39745137df1402a209ad2\") " pod="kube-system/kube-controller-manager-ip-172-31-28-88" Apr 21 10:17:56.192937 kubelet[2989]: I0421 10:17:56.192691 2989 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3b995ee02ca39745137df1402a209ad2-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-88\" (UID: \"3b995ee02ca39745137df1402a209ad2\") " pod="kube-system/kube-controller-manager-ip-172-31-28-88" Apr 21 10:17:56.253414 kubelet[2989]: I0421 10:17:56.253342 2989 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-88" Apr 21 10:17:56.253724 kubelet[2989]: E0421 10:17:56.253692 2989 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.88:6443/api/v1/nodes\": dial tcp 
172.31.28.88:6443: connect: connection refused" node="ip-172-31-28-88" Apr 21 10:17:56.339722 containerd[2102]: time="2026-04-21T10:17:56.339590864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-88,Uid:b88bb2a4b7188fc0c9ce1e09a7cacdf7,Namespace:kube-system,Attempt:0,}" Apr 21 10:17:56.350591 containerd[2102]: time="2026-04-21T10:17:56.350539312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-88,Uid:3b995ee02ca39745137df1402a209ad2,Namespace:kube-system,Attempt:0,}" Apr 21 10:17:56.352207 containerd[2102]: time="2026-04-21T10:17:56.352169590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-88,Uid:010ae3a85ac5c4cf18921e5f37f70190,Namespace:kube-system,Attempt:0,}" Apr 21 10:17:56.498946 kubelet[2989]: E0421 10:17:56.498896 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-88?timeout=10s\": dial tcp 172.31.28.88:6443: connect: connection refused" interval="800ms" Apr 21 10:17:56.656576 kubelet[2989]: I0421 10:17:56.656151 2989 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-88" Apr 21 10:17:56.656576 kubelet[2989]: E0421 10:17:56.656474 2989 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.88:6443/api/v1/nodes\": dial tcp 172.31.28.88:6443: connect: connection refused" node="ip-172-31-28-88" Apr 21 10:17:56.855338 kubelet[2989]: E0421 10:17:56.855289 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.28.88:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-88&limit=500&resourceVersion=0\": dial tcp 172.31.28.88:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 21 10:17:56.862471 kubelet[2989]: E0421 
10:17:56.862426 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.28.88:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.88:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 21 10:17:56.910028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3679306545.mount: Deactivated successfully. Apr 21 10:17:56.925921 containerd[2102]: time="2026-04-21T10:17:56.925855812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:17:56.928016 containerd[2102]: time="2026-04-21T10:17:56.927962832Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:17:56.929851 containerd[2102]: time="2026-04-21T10:17:56.929772911Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Apr 21 10:17:56.932123 containerd[2102]: time="2026-04-21T10:17:56.932074253Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 10:17:56.934293 containerd[2102]: time="2026-04-21T10:17:56.934249863Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:17:56.936396 containerd[2102]: time="2026-04-21T10:17:56.936345485Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:17:56.938665 containerd[2102]: 
time="2026-04-21T10:17:56.938544432Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 10:17:56.942287 containerd[2102]: time="2026-04-21T10:17:56.942223978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:17:56.943331 containerd[2102]: time="2026-04-21T10:17:56.943122084Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 590.873496ms" Apr 21 10:17:56.944796 containerd[2102]: time="2026-04-21T10:17:56.944717819Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 605.035183ms" Apr 21 10:17:56.949838 containerd[2102]: time="2026-04-21T10:17:56.949779858Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 599.14682ms" Apr 21 10:17:56.965916 kubelet[2989]: E0421 10:17:56.965870 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.28.88:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.88:6443: connect: connection refused" 
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 21 10:17:57.164734 containerd[2102]: time="2026-04-21T10:17:57.164368277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:17:57.164734 containerd[2102]: time="2026-04-21T10:17:57.164449011Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:17:57.164734 containerd[2102]: time="2026-04-21T10:17:57.164472355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:57.165685 containerd[2102]: time="2026-04-21T10:17:57.164585686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:57.165992 containerd[2102]: time="2026-04-21T10:17:57.164249514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:17:57.166159 containerd[2102]: time="2026-04-21T10:17:57.166132010Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:17:57.166309 containerd[2102]: time="2026-04-21T10:17:57.166271678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:57.167079 containerd[2102]: time="2026-04-21T10:17:57.166871397Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:17:57.167442 containerd[2102]: time="2026-04-21T10:17:57.167186212Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:17:57.167442 containerd[2102]: time="2026-04-21T10:17:57.167226427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:57.167442 containerd[2102]: time="2026-04-21T10:17:57.167367426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:57.168562 containerd[2102]: time="2026-04-21T10:17:57.168464898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:57.278080 kubelet[2989]: E0421 10:17:57.275643 2989 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.28.88:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.88:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 21 10:17:57.289006 containerd[2102]: time="2026-04-21T10:17:57.288314000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-88,Uid:3b995ee02ca39745137df1402a209ad2,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e4aa213774e87021561d7ef13ce48bfef121bd924ce9139ff8908c4b6d3aaff\"" Apr 21 10:17:57.300011 kubelet[2989]: E0421 10:17:57.299868 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-88?timeout=10s\": dial tcp 172.31.28.88:6443: connect: connection refused" interval="1.6s" Apr 21 10:17:57.309225 containerd[2102]: time="2026-04-21T10:17:57.309171277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-88,Uid:010ae3a85ac5c4cf18921e5f37f70190,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"a5ab1ae37157922a2f0fd8b19136a6fe96c08b3bfc2aaada864b61a32e50b0e5\"" Apr 21 10:17:57.317880 containerd[2102]: time="2026-04-21T10:17:57.317555594Z" level=info msg="CreateContainer within sandbox \"4e4aa213774e87021561d7ef13ce48bfef121bd924ce9139ff8908c4b6d3aaff\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 21 10:17:57.321455 containerd[2102]: time="2026-04-21T10:17:57.321387799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-88,Uid:b88bb2a4b7188fc0c9ce1e09a7cacdf7,Namespace:kube-system,Attempt:0,} returns sandbox id \"057c25d7c986d5735f1e3c7c1bbf9bbf4d5be7ad9ac27db768f1fca432e85e56\"" Apr 21 10:17:57.322048 containerd[2102]: time="2026-04-21T10:17:57.322015703Z" level=info msg="CreateContainer within sandbox \"a5ab1ae37157922a2f0fd8b19136a6fe96c08b3bfc2aaada864b61a32e50b0e5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 21 10:17:57.329897 containerd[2102]: time="2026-04-21T10:17:57.329861976Z" level=info msg="CreateContainer within sandbox \"057c25d7c986d5735f1e3c7c1bbf9bbf4d5be7ad9ac27db768f1fca432e85e56\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 21 10:17:57.364631 containerd[2102]: time="2026-04-21T10:17:57.364557629Z" level=info msg="CreateContainer within sandbox \"a5ab1ae37157922a2f0fd8b19136a6fe96c08b3bfc2aaada864b61a32e50b0e5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d760ed4d5c3e84908ca8ba34ceb8a0d8b8e26f04661d642cf62b9c5b34071e42\"" Apr 21 10:17:57.365566 containerd[2102]: time="2026-04-21T10:17:57.365530046Z" level=info msg="StartContainer for \"d760ed4d5c3e84908ca8ba34ceb8a0d8b8e26f04661d642cf62b9c5b34071e42\"" Apr 21 10:17:57.368740 containerd[2102]: time="2026-04-21T10:17:57.368603910Z" level=info msg="CreateContainer within sandbox \"4e4aa213774e87021561d7ef13ce48bfef121bd924ce9139ff8908c4b6d3aaff\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"66b1a30fb067ec338a20d7c884ee5f1bbaf4df173c3074d30791087eb128bec0\"" Apr 21 10:17:57.370111 containerd[2102]: time="2026-04-21T10:17:57.369462114Z" level=info msg="StartContainer for \"66b1a30fb067ec338a20d7c884ee5f1bbaf4df173c3074d30791087eb128bec0\"" Apr 21 10:17:57.377352 containerd[2102]: time="2026-04-21T10:17:57.377305243Z" level=info msg="CreateContainer within sandbox \"057c25d7c986d5735f1e3c7c1bbf9bbf4d5be7ad9ac27db768f1fca432e85e56\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a7b8d7386c0119db92890759b85b4eb30cf6258f9203e0a5b65024a94aa83584\"" Apr 21 10:17:57.378638 containerd[2102]: time="2026-04-21T10:17:57.378609459Z" level=info msg="StartContainer for \"a7b8d7386c0119db92890759b85b4eb30cf6258f9203e0a5b65024a94aa83584\"" Apr 21 10:17:57.458882 kubelet[2989]: I0421 10:17:57.458783 2989 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-88" Apr 21 10:17:57.459598 kubelet[2989]: E0421 10:17:57.459155 2989 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.88:6443/api/v1/nodes\": dial tcp 172.31.28.88:6443: connect: connection refused" node="ip-172-31-28-88" Apr 21 10:17:57.509121 containerd[2102]: time="2026-04-21T10:17:57.508311129Z" level=info msg="StartContainer for \"66b1a30fb067ec338a20d7c884ee5f1bbaf4df173c3074d30791087eb128bec0\" returns successfully" Apr 21 10:17:57.559107 containerd[2102]: time="2026-04-21T10:17:57.558569427Z" level=info msg="StartContainer for \"a7b8d7386c0119db92890759b85b4eb30cf6258f9203e0a5b65024a94aa83584\" returns successfully" Apr 21 10:17:57.564090 containerd[2102]: time="2026-04-21T10:17:57.563821596Z" level=info msg="StartContainer for \"d760ed4d5c3e84908ca8ba34ceb8a0d8b8e26f04661d642cf62b9c5b34071e42\" returns successfully" Apr 21 10:17:57.936131 kubelet[2989]: E0421 10:17:57.935825 2989 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ip-172-31-28-88\" not found" node="ip-172-31-28-88" Apr 21 10:17:57.943023 kubelet[2989]: E0421 10:17:57.942737 2989 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-88\" not found" node="ip-172-31-28-88" Apr 21 10:17:57.944252 kubelet[2989]: E0421 10:17:57.944229 2989 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-88\" not found" node="ip-172-31-28-88" Apr 21 10:17:58.028071 kubelet[2989]: E0421 10:17:58.025861 2989 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.28.88:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.28.88:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 21 10:17:58.952397 kubelet[2989]: E0421 10:17:58.952363 2989 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-88\" not found" node="ip-172-31-28-88" Apr 21 10:17:58.961091 kubelet[2989]: E0421 10:17:58.961032 2989 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-88\" not found" node="ip-172-31-28-88" Apr 21 10:17:59.062426 kubelet[2989]: I0421 10:17:59.062395 2989 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-88" Apr 21 10:17:59.630508 kubelet[2989]: E0421 10:17:59.630464 2989 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-28-88\" not found" node="ip-172-31-28-88" Apr 21 10:17:59.723913 kubelet[2989]: I0421 10:17:59.723874 2989 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-88" Apr 21 10:17:59.724708 kubelet[2989]: E0421 10:17:59.724129 2989 kubelet_node_status.go:548] "Error 
updating node status, will retry" err="error getting node \"ip-172-31-28-88\": node \"ip-172-31-28-88\" not found" Apr 21 10:17:59.796593 kubelet[2989]: I0421 10:17:59.796541 2989 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-88" Apr 21 10:17:59.805928 kubelet[2989]: E0421 10:17:59.805711 2989 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-28-88\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-28-88" Apr 21 10:17:59.805928 kubelet[2989]: I0421 10:17:59.805748 2989 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-88" Apr 21 10:17:59.808680 kubelet[2989]: E0421 10:17:59.808413 2989 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-28-88\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-28-88" Apr 21 10:17:59.808680 kubelet[2989]: I0421 10:17:59.808456 2989 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-88" Apr 21 10:17:59.811301 kubelet[2989]: E0421 10:17:59.811264 2989 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-28-88\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-28-88" Apr 21 10:17:59.860717 kubelet[2989]: I0421 10:17:59.860669 2989 apiserver.go:52] "Watching apiserver" Apr 21 10:17:59.891793 kubelet[2989]: I0421 10:17:59.891668 2989 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 21 10:17:59.946136 kubelet[2989]: I0421 10:17:59.946095 2989 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-88" Apr 21 10:17:59.948456 kubelet[2989]: E0421 10:17:59.948424 2989 
kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-28-88\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-28-88" Apr 21 10:18:02.772441 systemd[1]: Reloading requested from client PID 3279 ('systemctl') (unit session-7.scope)... Apr 21 10:18:02.772463 systemd[1]: Reloading... Apr 21 10:18:03.204097 zram_generator::config[3322]: No configuration found. Apr 21 10:18:03.424806 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 10:18:03.533554 systemd[1]: Reloading finished in 759 ms. Apr 21 10:18:03.579021 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:18:03.595861 systemd[1]: kubelet.service: Deactivated successfully. Apr 21 10:18:03.596313 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:18:03.606508 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:18:03.879765 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:18:03.895952 (kubelet)[3389]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 21 10:18:03.973963 kubelet[3389]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 10:18:03.973963 kubelet[3389]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Apr 21 10:18:03.973963 kubelet[3389]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 10:18:03.973963 kubelet[3389]: I0421 10:18:03.972719 3389 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 21 10:18:03.987075 kubelet[3389]: I0421 10:18:03.987000 3389 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 21 10:18:03.987075 kubelet[3389]: I0421 10:18:03.987030 3389 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 21 10:18:03.994098 kubelet[3389]: I0421 10:18:03.992104 3389 server.go:956] "Client rotation is on, will bootstrap in background" Apr 21 10:18:03.994446 kubelet[3389]: I0421 10:18:03.994427 3389 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 21 10:18:04.002279 kubelet[3389]: I0421 10:18:04.002240 3389 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 21 10:18:04.037103 kubelet[3389]: E0421 10:18:04.033676 3389 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 21 10:18:04.037103 kubelet[3389]: I0421 10:18:04.033725 3389 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 21 10:18:04.038365 kubelet[3389]: I0421 10:18:04.038315 3389 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 21 10:18:04.039124 kubelet[3389]: I0421 10:18:04.039081 3389 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 21 10:18:04.040888 kubelet[3389]: I0421 10:18:04.039127 3389 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-88","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 21 10:18:04.041045 kubelet[3389]: I0421 10:18:04.040904 3389 topology_manager.go:138] "Creating topology manager with none policy" Apr 21 
10:18:04.041045 kubelet[3389]: I0421 10:18:04.040921 3389 container_manager_linux.go:303] "Creating device plugin manager" Apr 21 10:18:04.041953 sudo[3404]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 21 10:18:04.042883 sudo[3404]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 21 10:18:04.045593 kubelet[3389]: I0421 10:18:04.045559 3389 state_mem.go:36] "Initialized new in-memory state store" Apr 21 10:18:04.046444 kubelet[3389]: I0421 10:18:04.046302 3389 kubelet.go:480] "Attempting to sync node with API server" Apr 21 10:18:04.046444 kubelet[3389]: I0421 10:18:04.046374 3389 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 21 10:18:04.051095 kubelet[3389]: I0421 10:18:04.050179 3389 kubelet.go:386] "Adding apiserver pod source" Apr 21 10:18:04.051095 kubelet[3389]: I0421 10:18:04.050219 3389 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 21 10:18:04.074899 kubelet[3389]: I0421 10:18:04.074851 3389 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 21 10:18:04.075881 kubelet[3389]: I0421 10:18:04.075845 3389 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 21 10:18:04.085656 kubelet[3389]: I0421 10:18:04.079949 3389 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 21 10:18:04.085656 kubelet[3389]: I0421 10:18:04.080035 3389 server.go:1289] "Started kubelet" Apr 21 10:18:04.089623 kubelet[3389]: I0421 10:18:04.089541 3389 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 21 10:18:04.090116 kubelet[3389]: I0421 10:18:04.090098 3389 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 21 10:18:04.090418 
kubelet[3389]: I0421 10:18:04.090223 3389 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 21 10:18:04.094705 kubelet[3389]: I0421 10:18:04.094672 3389 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 21 10:18:04.097623 kubelet[3389]: I0421 10:18:04.097340 3389 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 21 10:18:04.105737 kubelet[3389]: I0421 10:18:04.104977 3389 server.go:317] "Adding debug handlers to kubelet server" Apr 21 10:18:04.113471 kubelet[3389]: I0421 10:18:04.113365 3389 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 21 10:18:04.116187 kubelet[3389]: I0421 10:18:04.115009 3389 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 21 10:18:04.117093 kubelet[3389]: I0421 10:18:04.116581 3389 reconciler.go:26] "Reconciler: start to sync state" Apr 21 10:18:04.128077 kubelet[3389]: I0421 10:18:04.126255 3389 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 21 10:18:04.128707 kubelet[3389]: I0421 10:18:04.128676 3389 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 21 10:18:04.128707 kubelet[3389]: I0421 10:18:04.128709 3389 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 21 10:18:04.129695 kubelet[3389]: I0421 10:18:04.129510 3389 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 21 10:18:04.129695 kubelet[3389]: I0421 10:18:04.129526 3389 kubelet.go:2436] "Starting kubelet main sync loop" Apr 21 10:18:04.129695 kubelet[3389]: E0421 10:18:04.129587 3389 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 21 10:18:04.130249 kubelet[3389]: I0421 10:18:04.129895 3389 factory.go:223] Registration of the systemd container factory successfully Apr 21 10:18:04.130249 kubelet[3389]: I0421 10:18:04.130014 3389 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 21 10:18:04.141431 kubelet[3389]: E0421 10:18:04.141321 3389 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 21 10:18:04.143807 kubelet[3389]: I0421 10:18:04.143212 3389 factory.go:223] Registration of the containerd container factory successfully Apr 21 10:18:04.229812 kubelet[3389]: E0421 10:18:04.229776 3389 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 21 10:18:04.245236 kubelet[3389]: I0421 10:18:04.245034 3389 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 21 10:18:04.245236 kubelet[3389]: I0421 10:18:04.245063 3389 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 21 10:18:04.245236 kubelet[3389]: I0421 10:18:04.245138 3389 state_mem.go:36] "Initialized new in-memory state store" Apr 21 10:18:04.245470 kubelet[3389]: I0421 10:18:04.245294 3389 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 21 10:18:04.245470 kubelet[3389]: I0421 10:18:04.245305 3389 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 21 10:18:04.245470 kubelet[3389]: I0421 10:18:04.245327 3389 policy_none.go:49] "None policy: Start" Apr 21 
10:18:04.245470 kubelet[3389]: I0421 10:18:04.245340 3389 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 21 10:18:04.245470 kubelet[3389]: I0421 10:18:04.245352 3389 state_mem.go:35] "Initializing new in-memory state store" Apr 21 10:18:04.245662 kubelet[3389]: I0421 10:18:04.245473 3389 state_mem.go:75] "Updated machine memory state" Apr 21 10:18:04.249358 kubelet[3389]: E0421 10:18:04.249099 3389 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 21 10:18:04.249358 kubelet[3389]: I0421 10:18:04.249321 3389 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 21 10:18:04.249358 kubelet[3389]: I0421 10:18:04.249339 3389 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 21 10:18:04.250448 kubelet[3389]: I0421 10:18:04.249772 3389 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 21 10:18:04.254104 kubelet[3389]: E0421 10:18:04.252623 3389 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 21 10:18:04.363713 kubelet[3389]: I0421 10:18:04.363268 3389 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-88" Apr 21 10:18:04.372966 kubelet[3389]: I0421 10:18:04.372934 3389 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-28-88" Apr 21 10:18:04.373138 kubelet[3389]: I0421 10:18:04.373019 3389 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-88" Apr 21 10:18:04.432118 kubelet[3389]: I0421 10:18:04.431984 3389 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-88" Apr 21 10:18:04.434101 kubelet[3389]: I0421 10:18:04.433297 3389 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-88" Apr 21 10:18:04.435018 kubelet[3389]: I0421 10:18:04.434694 3389 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-88" Apr 21 10:18:04.520111 kubelet[3389]: I0421 10:18:04.520070 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b88bb2a4b7188fc0c9ce1e09a7cacdf7-ca-certs\") pod \"kube-apiserver-ip-172-31-28-88\" (UID: \"b88bb2a4b7188fc0c9ce1e09a7cacdf7\") " pod="kube-system/kube-apiserver-ip-172-31-28-88" Apr 21 10:18:04.520111 kubelet[3389]: I0421 10:18:04.520117 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b88bb2a4b7188fc0c9ce1e09a7cacdf7-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-88\" (UID: \"b88bb2a4b7188fc0c9ce1e09a7cacdf7\") " pod="kube-system/kube-apiserver-ip-172-31-28-88" Apr 21 10:18:04.520305 kubelet[3389]: I0421 10:18:04.520142 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/b88bb2a4b7188fc0c9ce1e09a7cacdf7-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-88\" (UID: \"b88bb2a4b7188fc0c9ce1e09a7cacdf7\") " pod="kube-system/kube-apiserver-ip-172-31-28-88" Apr 21 10:18:04.520305 kubelet[3389]: I0421 10:18:04.520173 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3b995ee02ca39745137df1402a209ad2-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-88\" (UID: \"3b995ee02ca39745137df1402a209ad2\") " pod="kube-system/kube-controller-manager-ip-172-31-28-88" Apr 21 10:18:04.520305 kubelet[3389]: I0421 10:18:04.520192 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3b995ee02ca39745137df1402a209ad2-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-88\" (UID: \"3b995ee02ca39745137df1402a209ad2\") " pod="kube-system/kube-controller-manager-ip-172-31-28-88" Apr 21 10:18:04.520305 kubelet[3389]: I0421 10:18:04.520213 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3b995ee02ca39745137df1402a209ad2-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-88\" (UID: \"3b995ee02ca39745137df1402a209ad2\") " pod="kube-system/kube-controller-manager-ip-172-31-28-88" Apr 21 10:18:04.520305 kubelet[3389]: I0421 10:18:04.520233 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3b995ee02ca39745137df1402a209ad2-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-88\" (UID: \"3b995ee02ca39745137df1402a209ad2\") " pod="kube-system/kube-controller-manager-ip-172-31-28-88" Apr 21 10:18:04.520534 kubelet[3389]: I0421 10:18:04.520256 3389 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3b995ee02ca39745137df1402a209ad2-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-88\" (UID: \"3b995ee02ca39745137df1402a209ad2\") " pod="kube-system/kube-controller-manager-ip-172-31-28-88" Apr 21 10:18:04.520534 kubelet[3389]: I0421 10:18:04.520283 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/010ae3a85ac5c4cf18921e5f37f70190-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-88\" (UID: \"010ae3a85ac5c4cf18921e5f37f70190\") " pod="kube-system/kube-scheduler-ip-172-31-28-88" Apr 21 10:18:04.802959 sudo[3404]: pam_unix(sudo:session): session closed for user root Apr 21 10:18:05.056128 kubelet[3389]: I0421 10:18:05.055992 3389 apiserver.go:52] "Watching apiserver" Apr 21 10:18:05.116648 kubelet[3389]: I0421 10:18:05.116600 3389 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 21 10:18:05.134168 update_engine[2080]: I20260421 10:18:05.134094 2080 update_attempter.cc:509] Updating boot flags... 
Apr 21 10:18:05.257779 kubelet[3389]: I0421 10:18:05.257490 3389 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-28-88" podStartSLOduration=1.2574717020000001 podStartE2EDuration="1.257471702s" podCreationTimestamp="2026-04-21 10:18:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:18:05.232749711 +0000 UTC m=+1.329459878" watchObservedRunningTime="2026-04-21 10:18:05.257471702 +0000 UTC m=+1.354181846" Apr 21 10:18:05.257779 kubelet[3389]: I0421 10:18:05.257609 3389 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-28-88" podStartSLOduration=1.257601585 podStartE2EDuration="1.257601585s" podCreationTimestamp="2026-04-21 10:18:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:18:05.257236904 +0000 UTC m=+1.353947047" watchObservedRunningTime="2026-04-21 10:18:05.257601585 +0000 UTC m=+1.354311728" Apr 21 10:18:05.303132 kubelet[3389]: I0421 10:18:05.302499 3389 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-28-88" podStartSLOduration=1.3024745260000001 podStartE2EDuration="1.302474526s" podCreationTimestamp="2026-04-21 10:18:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:18:05.283232693 +0000 UTC m=+1.379942839" watchObservedRunningTime="2026-04-21 10:18:05.302474526 +0000 UTC m=+1.399184669" Apr 21 10:18:05.328169 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (3450) Apr 21 10:18:05.642310 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (3454) Apr 21 
10:18:06.747914 sudo[2471]: pam_unix(sudo:session): session closed for user root Apr 21 10:18:06.914682 sshd[2467]: pam_unix(sshd:session): session closed for user core Apr 21 10:18:06.918482 systemd[1]: sshd@6-172.31.28.88:22-50.85.169.122:39764.service: Deactivated successfully. Apr 21 10:18:06.925007 systemd[1]: session-7.scope: Deactivated successfully. Apr 21 10:18:06.926368 systemd-logind[2076]: Session 7 logged out. Waiting for processes to exit. Apr 21 10:18:06.927651 systemd-logind[2076]: Removed session 7. Apr 21 10:18:07.953939 kubelet[3389]: I0421 10:18:07.953908 3389 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 21 10:18:07.954662 kubelet[3389]: I0421 10:18:07.954580 3389 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 21 10:18:07.955187 containerd[2102]: time="2026-04-21T10:18:07.954365855Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 21 10:18:09.054226 kubelet[3389]: I0421 10:18:09.054184 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c3724d10-db7c-4f3d-8f4d-5a8cce41f4e2-lib-modules\") pod \"kube-proxy-dhgd9\" (UID: \"c3724d10-db7c-4f3d-8f4d-5a8cce41f4e2\") " pod="kube-system/kube-proxy-dhgd9" Apr 21 10:18:09.054785 kubelet[3389]: I0421 10:18:09.054233 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/800c6a97-3ced-4524-aef0-0980ec19a935-cni-path\") pod \"cilium-h6jwz\" (UID: \"800c6a97-3ced-4524-aef0-0980ec19a935\") " pod="kube-system/cilium-h6jwz" Apr 21 10:18:09.054785 kubelet[3389]: I0421 10:18:09.054259 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/800c6a97-3ced-4524-aef0-0980ec19a935-etc-cni-netd\") pod \"cilium-h6jwz\" (UID: \"800c6a97-3ced-4524-aef0-0980ec19a935\") " pod="kube-system/cilium-h6jwz" Apr 21 10:18:09.054785 kubelet[3389]: I0421 10:18:09.054277 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/800c6a97-3ced-4524-aef0-0980ec19a935-xtables-lock\") pod \"cilium-h6jwz\" (UID: \"800c6a97-3ced-4524-aef0-0980ec19a935\") " pod="kube-system/cilium-h6jwz" Apr 21 10:18:09.054785 kubelet[3389]: I0421 10:18:09.054300 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/800c6a97-3ced-4524-aef0-0980ec19a935-host-proc-sys-kernel\") pod \"cilium-h6jwz\" (UID: \"800c6a97-3ced-4524-aef0-0980ec19a935\") " pod="kube-system/cilium-h6jwz" Apr 21 10:18:09.054785 kubelet[3389]: I0421 10:18:09.054323 3389 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/800c6a97-3ced-4524-aef0-0980ec19a935-cilium-run\") pod \"cilium-h6jwz\" (UID: \"800c6a97-3ced-4524-aef0-0980ec19a935\") " pod="kube-system/cilium-h6jwz" Apr 21 10:18:09.054785 kubelet[3389]: I0421 10:18:09.054350 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/800c6a97-3ced-4524-aef0-0980ec19a935-lib-modules\") pod \"cilium-h6jwz\" (UID: \"800c6a97-3ced-4524-aef0-0980ec19a935\") " pod="kube-system/cilium-h6jwz" Apr 21 10:18:09.054960 kubelet[3389]: I0421 10:18:09.054375 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jx9q\" (UniqueName: \"kubernetes.io/projected/c3724d10-db7c-4f3d-8f4d-5a8cce41f4e2-kube-api-access-8jx9q\") pod \"kube-proxy-dhgd9\" (UID: \"c3724d10-db7c-4f3d-8f4d-5a8cce41f4e2\") " pod="kube-system/kube-proxy-dhgd9" Apr 21 10:18:09.054960 kubelet[3389]: I0421 10:18:09.054407 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c3724d10-db7c-4f3d-8f4d-5a8cce41f4e2-xtables-lock\") pod \"kube-proxy-dhgd9\" (UID: \"c3724d10-db7c-4f3d-8f4d-5a8cce41f4e2\") " pod="kube-system/kube-proxy-dhgd9" Apr 21 10:18:09.054960 kubelet[3389]: I0421 10:18:09.054430 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/800c6a97-3ced-4524-aef0-0980ec19a935-clustermesh-secrets\") pod \"cilium-h6jwz\" (UID: \"800c6a97-3ced-4524-aef0-0980ec19a935\") " pod="kube-system/cilium-h6jwz" Apr 21 10:18:09.054960 kubelet[3389]: I0421 10:18:09.054453 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/800c6a97-3ced-4524-aef0-0980ec19a935-cilium-config-path\") pod \"cilium-h6jwz\" (UID: \"800c6a97-3ced-4524-aef0-0980ec19a935\") " pod="kube-system/cilium-h6jwz" Apr 21 10:18:09.054960 kubelet[3389]: I0421 10:18:09.054478 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c3724d10-db7c-4f3d-8f4d-5a8cce41f4e2-kube-proxy\") pod \"kube-proxy-dhgd9\" (UID: \"c3724d10-db7c-4f3d-8f4d-5a8cce41f4e2\") " pod="kube-system/kube-proxy-dhgd9" Apr 21 10:18:09.055168 kubelet[3389]: I0421 10:18:09.054499 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/800c6a97-3ced-4524-aef0-0980ec19a935-bpf-maps\") pod \"cilium-h6jwz\" (UID: \"800c6a97-3ced-4524-aef0-0980ec19a935\") " pod="kube-system/cilium-h6jwz" Apr 21 10:18:09.055168 kubelet[3389]: I0421 10:18:09.054523 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/800c6a97-3ced-4524-aef0-0980ec19a935-hostproc\") pod \"cilium-h6jwz\" (UID: \"800c6a97-3ced-4524-aef0-0980ec19a935\") " pod="kube-system/cilium-h6jwz" Apr 21 10:18:09.055168 kubelet[3389]: I0421 10:18:09.054545 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/800c6a97-3ced-4524-aef0-0980ec19a935-cilium-cgroup\") pod \"cilium-h6jwz\" (UID: \"800c6a97-3ced-4524-aef0-0980ec19a935\") " pod="kube-system/cilium-h6jwz" Apr 21 10:18:09.055168 kubelet[3389]: I0421 10:18:09.054570 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/800c6a97-3ced-4524-aef0-0980ec19a935-host-proc-sys-net\") pod \"cilium-h6jwz\" (UID: 
\"800c6a97-3ced-4524-aef0-0980ec19a935\") " pod="kube-system/cilium-h6jwz" Apr 21 10:18:09.055168 kubelet[3389]: I0421 10:18:09.054593 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/800c6a97-3ced-4524-aef0-0980ec19a935-hubble-tls\") pod \"cilium-h6jwz\" (UID: \"800c6a97-3ced-4524-aef0-0980ec19a935\") " pod="kube-system/cilium-h6jwz" Apr 21 10:18:09.055168 kubelet[3389]: I0421 10:18:09.054616 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrhgp\" (UniqueName: \"kubernetes.io/projected/800c6a97-3ced-4524-aef0-0980ec19a935-kube-api-access-rrhgp\") pod \"cilium-h6jwz\" (UID: \"800c6a97-3ced-4524-aef0-0980ec19a935\") " pod="kube-system/cilium-h6jwz" Apr 21 10:18:09.159089 kubelet[3389]: I0421 10:18:09.155690 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k46ld\" (UniqueName: \"kubernetes.io/projected/c8a5b0c0-4977-4663-9e7c-914b9f04c1cf-kube-api-access-k46ld\") pod \"cilium-operator-6c4d7847fc-mv7vr\" (UID: \"c8a5b0c0-4977-4663-9e7c-914b9f04c1cf\") " pod="kube-system/cilium-operator-6c4d7847fc-mv7vr" Apr 21 10:18:09.159089 kubelet[3389]: I0421 10:18:09.155773 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c8a5b0c0-4977-4663-9e7c-914b9f04c1cf-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-mv7vr\" (UID: \"c8a5b0c0-4977-4663-9e7c-914b9f04c1cf\") " pod="kube-system/cilium-operator-6c4d7847fc-mv7vr" Apr 21 10:18:09.327888 containerd[2102]: time="2026-04-21T10:18:09.326984450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dhgd9,Uid:c3724d10-db7c-4f3d-8f4d-5a8cce41f4e2,Namespace:kube-system,Attempt:0,}" Apr 21 10:18:09.327888 containerd[2102]: time="2026-04-21T10:18:09.327315855Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h6jwz,Uid:800c6a97-3ced-4524-aef0-0980ec19a935,Namespace:kube-system,Attempt:0,}" Apr 21 10:18:09.394978 containerd[2102]: time="2026-04-21T10:18:09.394610047Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:18:09.394978 containerd[2102]: time="2026-04-21T10:18:09.394686962Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:18:09.394978 containerd[2102]: time="2026-04-21T10:18:09.394723781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:18:09.394978 containerd[2102]: time="2026-04-21T10:18:09.394833190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:18:09.402624 containerd[2102]: time="2026-04-21T10:18:09.402298845Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:18:09.402624 containerd[2102]: time="2026-04-21T10:18:09.402369325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:18:09.402624 containerd[2102]: time="2026-04-21T10:18:09.402412225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:18:09.402624 containerd[2102]: time="2026-04-21T10:18:09.402560749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:18:09.444263 containerd[2102]: time="2026-04-21T10:18:09.443835284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-mv7vr,Uid:c8a5b0c0-4977-4663-9e7c-914b9f04c1cf,Namespace:kube-system,Attempt:0,}" Apr 21 10:18:09.467856 containerd[2102]: time="2026-04-21T10:18:09.467801613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dhgd9,Uid:c3724d10-db7c-4f3d-8f4d-5a8cce41f4e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c66f23d536e73e5d22173d5e6d0103db61951a552520fb4365d7f2f505cccc2\"" Apr 21 10:18:09.468019 containerd[2102]: time="2026-04-21T10:18:09.467969717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h6jwz,Uid:800c6a97-3ced-4524-aef0-0980ec19a935,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b40c47d44f77e48a687e47349f7c465ee3d9f57c929f9356c412572dd123d20\"" Apr 21 10:18:09.473499 containerd[2102]: time="2026-04-21T10:18:09.473456575Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 21 10:18:09.488232 containerd[2102]: time="2026-04-21T10:18:09.488027253Z" level=info msg="CreateContainer within sandbox \"2c66f23d536e73e5d22173d5e6d0103db61951a552520fb4365d7f2f505cccc2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 21 10:18:09.503688 containerd[2102]: time="2026-04-21T10:18:09.495788859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:18:09.503688 containerd[2102]: time="2026-04-21T10:18:09.495874513Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:18:09.503688 containerd[2102]: time="2026-04-21T10:18:09.495910723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:18:09.503688 containerd[2102]: time="2026-04-21T10:18:09.496045567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:18:09.522516 containerd[2102]: time="2026-04-21T10:18:09.522279819Z" level=info msg="CreateContainer within sandbox \"2c66f23d536e73e5d22173d5e6d0103db61951a552520fb4365d7f2f505cccc2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f5ad85064557ecfcf94d4fc4a430997997423d3d16a3ae3cc29b4c724f2e4655\"" Apr 21 10:18:09.523309 containerd[2102]: time="2026-04-21T10:18:09.523251921Z" level=info msg="StartContainer for \"f5ad85064557ecfcf94d4fc4a430997997423d3d16a3ae3cc29b4c724f2e4655\"" Apr 21 10:18:09.605681 containerd[2102]: time="2026-04-21T10:18:09.605557246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-mv7vr,Uid:c8a5b0c0-4977-4663-9e7c-914b9f04c1cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"12675caca024e3e84ee6b29026419b7e6cf2dc5f53d3ac1acd818559d95f34f5\"" Apr 21 10:18:09.623675 containerd[2102]: time="2026-04-21T10:18:09.623623514Z" level=info msg="StartContainer for \"f5ad85064557ecfcf94d4fc4a430997997423d3d16a3ae3cc29b4c724f2e4655\" returns successfully" Apr 21 10:18:10.246754 kubelet[3389]: I0421 10:18:10.246679 3389 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dhgd9" podStartSLOduration=2.246453308 podStartE2EDuration="2.246453308s" podCreationTimestamp="2026-04-21 10:18:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:18:10.239110957 +0000 UTC m=+6.335821101" watchObservedRunningTime="2026-04-21 10:18:10.246453308 +0000 UTC m=+6.343163453" Apr 21 10:18:15.350621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3119966529.mount: Deactivated 
successfully. Apr 21 10:18:16.952500 systemd-resolved[1985]: Under memory pressure, flushing caches. Apr 21 10:18:16.952552 systemd-resolved[1985]: Flushed all caches. Apr 21 10:18:16.953084 systemd-journald[1572]: Under memory pressure, flushing caches. Apr 21 10:18:18.080262 containerd[2102]: time="2026-04-21T10:18:18.080202109Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:18:18.082668 containerd[2102]: time="2026-04-21T10:18:18.082577709Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Apr 21 10:18:18.085185 containerd[2102]: time="2026-04-21T10:18:18.084599676Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:18:18.086701 containerd[2102]: time="2026-04-21T10:18:18.086651089Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.613145949s" Apr 21 10:18:18.086890 containerd[2102]: time="2026-04-21T10:18:18.086864689Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 21 10:18:18.088783 containerd[2102]: time="2026-04-21T10:18:18.088755176Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 21 10:18:18.094894 containerd[2102]: time="2026-04-21T10:18:18.094834658Z" level=info msg="CreateContainer within sandbox \"7b40c47d44f77e48a687e47349f7c465ee3d9f57c929f9356c412572dd123d20\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 21 10:18:18.214770 containerd[2102]: time="2026-04-21T10:18:18.214717395Z" level=info msg="CreateContainer within sandbox \"7b40c47d44f77e48a687e47349f7c465ee3d9f57c929f9356c412572dd123d20\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"df2157902d8ae987bc1f2594d9dbdcb1e7da15eb97b1f312f0bde7ed55691a77\"" Apr 21 10:18:18.216035 containerd[2102]: time="2026-04-21T10:18:18.215355090Z" level=info msg="StartContainer for \"df2157902d8ae987bc1f2594d9dbdcb1e7da15eb97b1f312f0bde7ed55691a77\"" Apr 21 10:18:18.451742 containerd[2102]: time="2026-04-21T10:18:18.449404288Z" level=info msg="StartContainer for \"df2157902d8ae987bc1f2594d9dbdcb1e7da15eb97b1f312f0bde7ed55691a77\" returns successfully" Apr 21 10:18:18.632923 containerd[2102]: time="2026-04-21T10:18:18.611988912Z" level=info msg="shim disconnected" id=df2157902d8ae987bc1f2594d9dbdcb1e7da15eb97b1f312f0bde7ed55691a77 namespace=k8s.io Apr 21 10:18:18.632923 containerd[2102]: time="2026-04-21T10:18:18.632918865Z" level=warning msg="cleaning up after shim disconnected" id=df2157902d8ae987bc1f2594d9dbdcb1e7da15eb97b1f312f0bde7ed55691a77 namespace=k8s.io Apr 21 10:18:18.632923 containerd[2102]: time="2026-04-21T10:18:18.633035646Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:18:19.208505 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df2157902d8ae987bc1f2594d9dbdcb1e7da15eb97b1f312f0bde7ed55691a77-rootfs.mount: Deactivated successfully. Apr 21 10:18:19.310628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount787276015.mount: Deactivated successfully. 
Apr 21 10:18:19.414187 containerd[2102]: time="2026-04-21T10:18:19.410429301Z" level=info msg="CreateContainer within sandbox \"7b40c47d44f77e48a687e47349f7c465ee3d9f57c929f9356c412572dd123d20\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 21 10:18:19.442039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2099394935.mount: Deactivated successfully. Apr 21 10:18:19.458834 containerd[2102]: time="2026-04-21T10:18:19.458786234Z" level=info msg="CreateContainer within sandbox \"7b40c47d44f77e48a687e47349f7c465ee3d9f57c929f9356c412572dd123d20\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"faa536c62cf7e7b1aa057e2ff6b492005432fc13f38f21defba7f3e1e98600bf\"" Apr 21 10:18:19.460910 containerd[2102]: time="2026-04-21T10:18:19.460807754Z" level=info msg="StartContainer for \"faa536c62cf7e7b1aa057e2ff6b492005432fc13f38f21defba7f3e1e98600bf\"" Apr 21 10:18:19.547136 containerd[2102]: time="2026-04-21T10:18:19.546748074Z" level=info msg="StartContainer for \"faa536c62cf7e7b1aa057e2ff6b492005432fc13f38f21defba7f3e1e98600bf\" returns successfully" Apr 21 10:18:19.564428 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 21 10:18:19.564849 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 21 10:18:19.564941 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 21 10:18:19.574981 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 21 10:18:19.618827 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Apr 21 10:18:19.678360 containerd[2102]: time="2026-04-21T10:18:19.678286404Z" level=info msg="shim disconnected" id=faa536c62cf7e7b1aa057e2ff6b492005432fc13f38f21defba7f3e1e98600bf namespace=k8s.io
Apr 21 10:18:19.678360 containerd[2102]: time="2026-04-21T10:18:19.678354127Z" level=warning msg="cleaning up after shim disconnected" id=faa536c62cf7e7b1aa057e2ff6b492005432fc13f38f21defba7f3e1e98600bf namespace=k8s.io
Apr 21 10:18:19.678360 containerd[2102]: time="2026-04-21T10:18:19.678366223Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:18:20.321664 containerd[2102]: time="2026-04-21T10:18:20.320655111Z" level=info msg="CreateContainer within sandbox \"7b40c47d44f77e48a687e47349f7c465ee3d9f57c929f9356c412572dd123d20\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 21 10:18:20.377325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1578928966.mount: Deactivated successfully.
Apr 21 10:18:20.391114 containerd[2102]: time="2026-04-21T10:18:20.389308580Z" level=info msg="CreateContainer within sandbox \"7b40c47d44f77e48a687e47349f7c465ee3d9f57c929f9356c412572dd123d20\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"60256dffcdea41f391587c5dca3c90d04a8c70bf5529c54cf5e36a0a33f6810f\""
Apr 21 10:18:20.391114 containerd[2102]: time="2026-04-21T10:18:20.390339746Z" level=info msg="StartContainer for \"60256dffcdea41f391587c5dca3c90d04a8c70bf5529c54cf5e36a0a33f6810f\""
Apr 21 10:18:20.492861 containerd[2102]: time="2026-04-21T10:18:20.492806412Z" level=info msg="StartContainer for \"60256dffcdea41f391587c5dca3c90d04a8c70bf5529c54cf5e36a0a33f6810f\" returns successfully"
Apr 21 10:18:20.574197 containerd[2102]: time="2026-04-21T10:18:20.573629828Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:18:20.577002 containerd[2102]: time="2026-04-21T10:18:20.576833724Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Apr 21 10:18:20.582078 containerd[2102]: time="2026-04-21T10:18:20.580305584Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:18:20.582370 containerd[2102]: time="2026-04-21T10:18:20.582330183Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.492797758s"
Apr 21 10:18:20.582494 containerd[2102]: time="2026-04-21T10:18:20.582473456Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Apr 21 10:18:20.588451 containerd[2102]: time="2026-04-21T10:18:20.588410828Z" level=info msg="CreateContainer within sandbox \"12675caca024e3e84ee6b29026419b7e6cf2dc5f53d3ac1acd818559d95f34f5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Apr 21 10:18:20.628238 containerd[2102]: time="2026-04-21T10:18:20.628190661Z" level=info msg="CreateContainer within sandbox \"12675caca024e3e84ee6b29026419b7e6cf2dc5f53d3ac1acd818559d95f34f5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"659ac91ec4e67d4a59d0b3d902622d219fc6b25dce1ba86f139022766ca92b79\""
Apr 21 10:18:20.629145 containerd[2102]: time="2026-04-21T10:18:20.629002718Z" level=info msg="StartContainer for \"659ac91ec4e67d4a59d0b3d902622d219fc6b25dce1ba86f139022766ca92b79\""
Apr 21 10:18:20.672417 containerd[2102]: time="2026-04-21T10:18:20.672157184Z" level=info msg="shim disconnected" id=60256dffcdea41f391587c5dca3c90d04a8c70bf5529c54cf5e36a0a33f6810f namespace=k8s.io
Apr 21 10:18:20.672417 containerd[2102]: time="2026-04-21T10:18:20.672235297Z" level=warning msg="cleaning up after shim disconnected" id=60256dffcdea41f391587c5dca3c90d04a8c70bf5529c54cf5e36a0a33f6810f namespace=k8s.io
Apr 21 10:18:20.672417 containerd[2102]: time="2026-04-21T10:18:20.672247799Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:18:20.705904 containerd[2102]: time="2026-04-21T10:18:20.705635322Z" level=info msg="StartContainer for \"659ac91ec4e67d4a59d0b3d902622d219fc6b25dce1ba86f139022766ca92b79\" returns successfully"
Apr 21 10:18:20.710221 containerd[2102]: time="2026-04-21T10:18:20.709931310Z" level=warning msg="cleanup warnings time=\"2026-04-21T10:18:20Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 21 10:18:21.332521 containerd[2102]: time="2026-04-21T10:18:21.332374549Z" level=info msg="CreateContainer within sandbox \"7b40c47d44f77e48a687e47349f7c465ee3d9f57c929f9356c412572dd123d20\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 21 10:18:21.431643 containerd[2102]: time="2026-04-21T10:18:21.431597814Z" level=info msg="CreateContainer within sandbox \"7b40c47d44f77e48a687e47349f7c465ee3d9f57c929f9356c412572dd123d20\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"96ddb19c93a24730663e3813c0a61168dcf90b8e14ec6883912d85d911abb047\""
Apr 21 10:18:21.438793 containerd[2102]: time="2026-04-21T10:18:21.438750447Z" level=info msg="StartContainer for \"96ddb19c93a24730663e3813c0a61168dcf90b8e14ec6883912d85d911abb047\""
Apr 21 10:18:21.630150 kubelet[3389]: I0421 10:18:21.623942 3389 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-mv7vr" podStartSLOduration=1.648644235 podStartE2EDuration="12.623918581s" podCreationTimestamp="2026-04-21 10:18:09 +0000 UTC" firstStartedPulling="2026-04-21 10:18:09.608024585 +0000 UTC m=+5.704734705" lastFinishedPulling="2026-04-21 10:18:20.58329893 +0000 UTC m=+16.680009051" observedRunningTime="2026-04-21 10:18:21.467144371 +0000 UTC m=+17.563854514" watchObservedRunningTime="2026-04-21 10:18:21.623918581 +0000 UTC m=+17.720628723"
Apr 21 10:18:21.667786 containerd[2102]: time="2026-04-21T10:18:21.661797322Z" level=info msg="StartContainer for \"96ddb19c93a24730663e3813c0a61168dcf90b8e14ec6883912d85d911abb047\" returns successfully"
Apr 21 10:18:21.735580 containerd[2102]: time="2026-04-21T10:18:21.735515887Z" level=info msg="shim disconnected" id=96ddb19c93a24730663e3813c0a61168dcf90b8e14ec6883912d85d911abb047 namespace=k8s.io
Apr 21 10:18:21.735580 containerd[2102]: time="2026-04-21T10:18:21.735582614Z" level=warning msg="cleaning up after shim disconnected" id=96ddb19c93a24730663e3813c0a61168dcf90b8e14ec6883912d85d911abb047 namespace=k8s.io
Apr 21 10:18:21.735861 containerd[2102]: time="2026-04-21T10:18:21.735593895Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:18:21.762838 containerd[2102]: time="2026-04-21T10:18:21.761964966Z" level=warning msg="cleanup warnings time=\"2026-04-21T10:18:21Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 21 10:18:22.208259 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96ddb19c93a24730663e3813c0a61168dcf90b8e14ec6883912d85d911abb047-rootfs.mount: Deactivated successfully.
Apr 21 10:18:22.343473 containerd[2102]: time="2026-04-21T10:18:22.343427987Z" level=info msg="CreateContainer within sandbox \"7b40c47d44f77e48a687e47349f7c465ee3d9f57c929f9356c412572dd123d20\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 21 10:18:22.403067 containerd[2102]: time="2026-04-21T10:18:22.401629411Z" level=info msg="CreateContainer within sandbox \"7b40c47d44f77e48a687e47349f7c465ee3d9f57c929f9356c412572dd123d20\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e0577e15ac27cc169b508cf8ff6872fe882d212bd5bfab59323f39936ad33416\""
Apr 21 10:18:22.408357 containerd[2102]: time="2026-04-21T10:18:22.406141411Z" level=info msg="StartContainer for \"e0577e15ac27cc169b508cf8ff6872fe882d212bd5bfab59323f39936ad33416\""
Apr 21 10:18:22.503512 containerd[2102]: time="2026-04-21T10:18:22.503216665Z" level=info msg="StartContainer for \"e0577e15ac27cc169b508cf8ff6872fe882d212bd5bfab59323f39936ad33416\" returns successfully"
Apr 21 10:18:22.731119 kubelet[3389]: I0421 10:18:22.731090 3389 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Apr 21 10:18:22.885529 kubelet[3389]: I0421 10:18:22.885245 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb58s\" (UniqueName: \"kubernetes.io/projected/8545c15a-f968-4131-80ab-65bc82d80bb8-kube-api-access-mb58s\") pod \"coredns-674b8bbfcf-n9882\" (UID: \"8545c15a-f968-4131-80ab-65bc82d80bb8\") " pod="kube-system/coredns-674b8bbfcf-n9882"
Apr 21 10:18:22.885529 kubelet[3389]: I0421 10:18:22.885313 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/337a9d7a-baf7-4679-abd0-f2a81ac0573a-config-volume\") pod \"coredns-674b8bbfcf-248zc\" (UID: \"337a9d7a-baf7-4679-abd0-f2a81ac0573a\") " pod="kube-system/coredns-674b8bbfcf-248zc"
Apr 21 10:18:22.885529 kubelet[3389]: I0421 10:18:22.885346 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8545c15a-f968-4131-80ab-65bc82d80bb8-config-volume\") pod \"coredns-674b8bbfcf-n9882\" (UID: \"8545c15a-f968-4131-80ab-65bc82d80bb8\") " pod="kube-system/coredns-674b8bbfcf-n9882"
Apr 21 10:18:22.885529 kubelet[3389]: I0421 10:18:22.885376 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp8b7\" (UniqueName: \"kubernetes.io/projected/337a9d7a-baf7-4679-abd0-f2a81ac0573a-kube-api-access-dp8b7\") pod \"coredns-674b8bbfcf-248zc\" (UID: \"337a9d7a-baf7-4679-abd0-f2a81ac0573a\") " pod="kube-system/coredns-674b8bbfcf-248zc"
Apr 21 10:18:23.104168 containerd[2102]: time="2026-04-21T10:18:23.103151030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-248zc,Uid:337a9d7a-baf7-4679-abd0-f2a81ac0573a,Namespace:kube-system,Attempt:0,}"
Apr 21 10:18:23.108638 containerd[2102]: time="2026-04-21T10:18:23.108245737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n9882,Uid:8545c15a-f968-4131-80ab-65bc82d80bb8,Namespace:kube-system,Attempt:0,}"
Apr 21 10:18:23.403138 kubelet[3389]: I0421 10:18:23.403044 3389 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-h6jwz" podStartSLOduration=6.785814531 podStartE2EDuration="15.403021531s" podCreationTimestamp="2026-04-21 10:18:08 +0000 UTC" firstStartedPulling="2026-04-21 10:18:09.470698479 +0000 UTC m=+5.567408601" lastFinishedPulling="2026-04-21 10:18:18.08790548 +0000 UTC m=+14.184615601" observedRunningTime="2026-04-21 10:18:23.40067316 +0000 UTC m=+19.497383304" watchObservedRunningTime="2026-04-21 10:18:23.403021531 +0000 UTC m=+19.499731675"
Apr 21 10:18:25.161185 (udev-worker)[4366]: Network interface NamePolicy= disabled on kernel command line.
Apr 21 10:18:25.161432 systemd-networkd[1655]: cilium_host: Link UP
Apr 21 10:18:25.161617 systemd-networkd[1655]: cilium_net: Link UP
Apr 21 10:18:25.161622 systemd-networkd[1655]: cilium_net: Gained carrier
Apr 21 10:18:25.164272 systemd-networkd[1655]: cilium_host: Gained carrier
Apr 21 10:18:25.164621 systemd-networkd[1655]: cilium_host: Gained IPv6LL
Apr 21 10:18:25.168395 (udev-worker)[4403]: Network interface NamePolicy= disabled on kernel command line.
Apr 21 10:18:25.293827 (udev-worker)[4415]: Network interface NamePolicy= disabled on kernel command line.
Apr 21 10:18:25.302306 systemd-networkd[1655]: cilium_vxlan: Link UP
Apr 21 10:18:25.302321 systemd-networkd[1655]: cilium_vxlan: Gained carrier
Apr 21 10:18:25.431333 systemd-networkd[1655]: cilium_net: Gained IPv6LL
Apr 21 10:18:25.841195 kernel: NET: Registered PF_ALG protocol family
Apr 21 10:18:26.487438 systemd-networkd[1655]: cilium_vxlan: Gained IPv6LL
Apr 21 10:18:26.613374 systemd-networkd[1655]: lxc_health: Link UP
Apr 21 10:18:26.620205 systemd-networkd[1655]: lxc_health: Gained carrier
Apr 21 10:18:26.876609 systemd-networkd[1655]: lxc690a1f4ebad4: Link UP
Apr 21 10:18:26.885083 kernel: eth0: renamed from tmpb6550
Apr 21 10:18:26.896861 systemd-networkd[1655]: lxc690a1f4ebad4: Gained carrier
Apr 21 10:18:27.311466 systemd-networkd[1655]: lxcecb944ce60d6: Link UP
Apr 21 10:18:27.318104 kernel: eth0: renamed from tmp9f455
Apr 21 10:18:27.329722 systemd-networkd[1655]: lxcecb944ce60d6: Gained carrier
Apr 21 10:18:28.471363 systemd-networkd[1655]: lxcecb944ce60d6: Gained IPv6LL
Apr 21 10:18:28.599256 systemd-networkd[1655]: lxc_health: Gained IPv6LL
Apr 21 10:18:28.792334 systemd-networkd[1655]: lxc690a1f4ebad4: Gained IPv6LL
Apr 21 10:18:31.476191 containerd[2102]: time="2026-04-21T10:18:31.476040751Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 21 10:18:31.480023 containerd[2102]: time="2026-04-21T10:18:31.479743597Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 21 10:18:31.480023 containerd[2102]: time="2026-04-21T10:18:31.479783427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:18:31.480023 containerd[2102]: time="2026-04-21T10:18:31.479918728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:18:31.604677 containerd[2102]: time="2026-04-21T10:18:31.604375364Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 21 10:18:31.604677 containerd[2102]: time="2026-04-21T10:18:31.604455502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 21 10:18:31.604677 containerd[2102]: time="2026-04-21T10:18:31.604511528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:18:31.605160 containerd[2102]: time="2026-04-21T10:18:31.604667861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:18:31.643914 containerd[2102]: time="2026-04-21T10:18:31.643873564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-248zc,Uid:337a9d7a-baf7-4679-abd0-f2a81ac0573a,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f4550d023ba383d25cc2136d6a458167f5e2abdcfdb2e8b8c611037845d2289\""
Apr 21 10:18:31.659938 containerd[2102]: time="2026-04-21T10:18:31.659756501Z" level=info msg="CreateContainer within sandbox \"9f4550d023ba383d25cc2136d6a458167f5e2abdcfdb2e8b8c611037845d2289\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 21 10:18:31.735513 containerd[2102]: time="2026-04-21T10:18:31.735306826Z" level=info msg="CreateContainer within sandbox \"9f4550d023ba383d25cc2136d6a458167f5e2abdcfdb2e8b8c611037845d2289\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fe569a11a9492deb0a22fc8b8f31b0470b550255e5e96cd2fad7da6d52140d90\""
Apr 21 10:18:31.738122 containerd[2102]: time="2026-04-21T10:18:31.736400146Z" level=info msg="StartContainer for \"fe569a11a9492deb0a22fc8b8f31b0470b550255e5e96cd2fad7da6d52140d90\""
Apr 21 10:18:31.775492 containerd[2102]: time="2026-04-21T10:18:31.775450542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n9882,Uid:8545c15a-f968-4131-80ab-65bc82d80bb8,Namespace:kube-system,Attempt:0,} returns sandbox id \"b655044acd0e6091952b78eb094356d1a3543cc9cdf5fcf353fe5929be7d2c82\""
Apr 21 10:18:31.787957 containerd[2102]: time="2026-04-21T10:18:31.787901957Z" level=info msg="CreateContainer within sandbox \"b655044acd0e6091952b78eb094356d1a3543cc9cdf5fcf353fe5929be7d2c82\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 21 10:18:31.823143 containerd[2102]: time="2026-04-21T10:18:31.822981779Z" level=info msg="CreateContainer within sandbox \"b655044acd0e6091952b78eb094356d1a3543cc9cdf5fcf353fe5929be7d2c82\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0dd3b117bc8b97cbc26d92d467b4ca4a04283e4e67190646245b0953b46964c0\""
Apr 21 10:18:31.826812 containerd[2102]: time="2026-04-21T10:18:31.825172633Z" level=info msg="StartContainer for \"0dd3b117bc8b97cbc26d92d467b4ca4a04283e4e67190646245b0953b46964c0\""
Apr 21 10:18:31.835397 containerd[2102]: time="2026-04-21T10:18:31.835354582Z" level=info msg="StartContainer for \"fe569a11a9492deb0a22fc8b8f31b0470b550255e5e96cd2fad7da6d52140d90\" returns successfully"
Apr 21 10:18:31.895603 containerd[2102]: time="2026-04-21T10:18:31.895556276Z" level=info msg="StartContainer for \"0dd3b117bc8b97cbc26d92d467b4ca4a04283e4e67190646245b0953b46964c0\" returns successfully"
Apr 21 10:18:32.470078 kubelet[3389]: I0421 10:18:32.467973 3389 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-248zc" podStartSLOduration=23.467949988 podStartE2EDuration="23.467949988s" podCreationTimestamp="2026-04-21 10:18:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:18:32.452604784 +0000 UTC m=+28.549314927" watchObservedRunningTime="2026-04-21 10:18:32.467949988 +0000 UTC m=+28.564660168"
Apr 21 10:18:32.502201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4176298931.mount: Deactivated successfully.
Apr 21 10:18:34.196088 kubelet[3389]: I0421 10:18:34.183857 3389 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 21 10:18:34.210424 kubelet[3389]: I0421 10:18:34.208213 3389 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-n9882" podStartSLOduration=25.20818922 podStartE2EDuration="25.20818922s" podCreationTimestamp="2026-04-21 10:18:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:18:32.499866047 +0000 UTC m=+28.596576189" watchObservedRunningTime="2026-04-21 10:18:34.20818922 +0000 UTC m=+30.304899363"
Apr 21 10:18:34.586232 ntpd[2060]: Listen normally on 6 cilium_host 192.168.0.149:123
Apr 21 10:18:34.587385 ntpd[2060]: 21 Apr 10:18:34 ntpd[2060]: Listen normally on 6 cilium_host 192.168.0.149:123
Apr 21 10:18:34.587385 ntpd[2060]: 21 Apr 10:18:34 ntpd[2060]: Listen normally on 7 cilium_net [fe80::948f:6cff:fe99:4afc%4]:123
Apr 21 10:18:34.587385 ntpd[2060]: 21 Apr 10:18:34 ntpd[2060]: Listen normally on 8 cilium_host [fe80::c0a0:3cff:fe2e:509f%5]:123
Apr 21 10:18:34.587385 ntpd[2060]: 21 Apr 10:18:34 ntpd[2060]: Listen normally on 9 cilium_vxlan [fe80::30a2:9eff:fe48:70db%6]:123
Apr 21 10:18:34.587385 ntpd[2060]: 21 Apr 10:18:34 ntpd[2060]: Listen normally on 10 lxc_health [fe80::5406:4bff:fee2:4a58%8]:123
Apr 21 10:18:34.587385 ntpd[2060]: 21 Apr 10:18:34 ntpd[2060]: Listen normally on 11 lxc690a1f4ebad4 [fe80::2c57:c6ff:fe3a:9d08%10]:123
Apr 21 10:18:34.587385 ntpd[2060]: 21 Apr 10:18:34 ntpd[2060]: Listen normally on 12 lxcecb944ce60d6 [fe80::5408:f9ff:fe9f:3d21%12]:123
Apr 21 10:18:34.586325 ntpd[2060]: Listen normally on 7 cilium_net [fe80::948f:6cff:fe99:4afc%4]:123
Apr 21 10:18:34.586386 ntpd[2060]: Listen normally on 8 cilium_host [fe80::c0a0:3cff:fe2e:509f%5]:123
Apr 21 10:18:34.586429 ntpd[2060]: Listen normally on 9 cilium_vxlan [fe80::30a2:9eff:fe48:70db%6]:123
Apr 21 10:18:34.586471 ntpd[2060]: Listen normally on 10 lxc_health [fe80::5406:4bff:fee2:4a58%8]:123
Apr 21 10:18:34.586512 ntpd[2060]: Listen normally on 11 lxc690a1f4ebad4 [fe80::2c57:c6ff:fe3a:9d08%10]:123
Apr 21 10:18:34.586554 ntpd[2060]: Listen normally on 12 lxcecb944ce60d6 [fe80::5408:f9ff:fe9f:3d21%12]:123
Apr 21 10:18:50.554654 systemd[1]: Started sshd@7-172.31.28.88:22-50.85.169.122:41548.service - OpenSSH per-connection server daemon (50.85.169.122:41548).
Apr 21 10:18:51.570525 sshd[4940]: Accepted publickey for core from 50.85.169.122 port 41548 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:18:51.572040 sshd[4940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:18:51.580424 systemd-logind[2076]: New session 8 of user core.
Apr 21 10:18:51.593998 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 21 10:18:52.948845 sshd[4940]: pam_unix(sshd:session): session closed for user core
Apr 21 10:18:52.953384 systemd[1]: sshd@7-172.31.28.88:22-50.85.169.122:41548.service: Deactivated successfully.
Apr 21 10:18:52.959564 systemd-logind[2076]: Session 8 logged out. Waiting for processes to exit.
Apr 21 10:18:52.960113 systemd[1]: session-8.scope: Deactivated successfully.
Apr 21 10:18:52.961603 systemd-logind[2076]: Removed session 8.
Apr 21 10:18:58.125518 systemd[1]: Started sshd@8-172.31.28.88:22-50.85.169.122:41560.service - OpenSSH per-connection server daemon (50.85.169.122:41560).
Apr 21 10:18:59.122421 sshd[4955]: Accepted publickey for core from 50.85.169.122 port 41560 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:18:59.123102 sshd[4955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:18:59.127843 systemd-logind[2076]: New session 9 of user core.
Apr 21 10:18:59.137507 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 21 10:18:59.888757 sshd[4955]: pam_unix(sshd:session): session closed for user core
Apr 21 10:18:59.892915 systemd-logind[2076]: Session 9 logged out. Waiting for processes to exit.
Apr 21 10:18:59.894295 systemd[1]: sshd@8-172.31.28.88:22-50.85.169.122:41560.service: Deactivated successfully.
Apr 21 10:18:59.899243 systemd[1]: session-9.scope: Deactivated successfully.
Apr 21 10:18:59.900914 systemd-logind[2076]: Removed session 9.
Apr 21 10:19:05.064025 systemd[1]: Started sshd@9-172.31.28.88:22-50.85.169.122:46278.service - OpenSSH per-connection server daemon (50.85.169.122:46278).
Apr 21 10:19:06.080368 sshd[4971]: Accepted publickey for core from 50.85.169.122 port 46278 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:19:06.082137 sshd[4971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:19:06.086807 systemd-logind[2076]: New session 10 of user core.
Apr 21 10:19:06.093578 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 21 10:19:06.866193 sshd[4971]: pam_unix(sshd:session): session closed for user core
Apr 21 10:19:06.870629 systemd[1]: sshd@9-172.31.28.88:22-50.85.169.122:46278.service: Deactivated successfully.
Apr 21 10:19:06.875962 systemd[1]: session-10.scope: Deactivated successfully.
Apr 21 10:19:06.877198 systemd-logind[2076]: Session 10 logged out. Waiting for processes to exit.
Apr 21 10:19:06.878268 systemd-logind[2076]: Removed session 10.
Apr 21 10:19:12.030831 systemd[1]: Started sshd@10-172.31.28.88:22-50.85.169.122:34486.service - OpenSSH per-connection server daemon (50.85.169.122:34486).
Apr 21 10:19:13.021341 sshd[4989]: Accepted publickey for core from 50.85.169.122 port 34486 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:19:13.022917 sshd[4989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:19:13.028635 systemd-logind[2076]: New session 11 of user core.
Apr 21 10:19:13.032545 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 21 10:19:13.773657 sshd[4989]: pam_unix(sshd:session): session closed for user core
Apr 21 10:19:13.778129 systemd-logind[2076]: Session 11 logged out. Waiting for processes to exit.
Apr 21 10:19:13.779241 systemd[1]: sshd@10-172.31.28.88:22-50.85.169.122:34486.service: Deactivated successfully.
Apr 21 10:19:13.784047 systemd[1]: session-11.scope: Deactivated successfully.
Apr 21 10:19:13.785511 systemd-logind[2076]: Removed session 11.
Apr 21 10:19:13.952930 systemd[1]: Started sshd@11-172.31.28.88:22-50.85.169.122:34494.service - OpenSSH per-connection server daemon (50.85.169.122:34494).
Apr 21 10:19:14.975862 sshd[5005]: Accepted publickey for core from 50.85.169.122 port 34494 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:19:14.977593 sshd[5005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:19:14.982802 systemd-logind[2076]: New session 12 of user core.
Apr 21 10:19:14.989390 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 21 10:19:15.804012 sshd[5005]: pam_unix(sshd:session): session closed for user core
Apr 21 10:19:15.807579 systemd[1]: sshd@11-172.31.28.88:22-50.85.169.122:34494.service: Deactivated successfully.
Apr 21 10:19:15.812220 systemd-logind[2076]: Session 12 logged out. Waiting for processes to exit.
Apr 21 10:19:15.814806 systemd[1]: session-12.scope: Deactivated successfully.
Apr 21 10:19:15.816205 systemd-logind[2076]: Removed session 12.
Apr 21 10:19:15.972863 systemd[1]: Started sshd@12-172.31.28.88:22-50.85.169.122:34502.service - OpenSSH per-connection server daemon (50.85.169.122:34502).
Apr 21 10:19:16.965039 sshd[5017]: Accepted publickey for core from 50.85.169.122 port 34502 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:19:16.966618 sshd[5017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:19:16.972129 systemd-logind[2076]: New session 13 of user core.
Apr 21 10:19:16.982027 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 21 10:19:17.725706 sshd[5017]: pam_unix(sshd:session): session closed for user core
Apr 21 10:19:17.731726 systemd[1]: sshd@12-172.31.28.88:22-50.85.169.122:34502.service: Deactivated successfully.
Apr 21 10:19:17.732222 systemd-logind[2076]: Session 13 logged out. Waiting for processes to exit.
Apr 21 10:19:17.737181 systemd[1]: session-13.scope: Deactivated successfully.
Apr 21 10:19:17.738513 systemd-logind[2076]: Removed session 13.
Apr 21 10:19:22.905853 systemd[1]: Started sshd@13-172.31.28.88:22-50.85.169.122:49986.service - OpenSSH per-connection server daemon (50.85.169.122:49986).
Apr 21 10:19:23.931726 sshd[5031]: Accepted publickey for core from 50.85.169.122 port 49986 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:19:23.932532 sshd[5031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:19:23.937764 systemd-logind[2076]: New session 14 of user core.
Apr 21 10:19:23.940490 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 21 10:19:24.722441 sshd[5031]: pam_unix(sshd:session): session closed for user core
Apr 21 10:19:24.729332 systemd[1]: sshd@13-172.31.28.88:22-50.85.169.122:49986.service: Deactivated successfully.
Apr 21 10:19:24.733006 systemd[1]: session-14.scope: Deactivated successfully.
Apr 21 10:19:24.733837 systemd-logind[2076]: Session 14 logged out. Waiting for processes to exit.
Apr 21 10:19:24.734869 systemd-logind[2076]: Removed session 14.
Apr 21 10:19:24.890739 systemd[1]: Started sshd@14-172.31.28.88:22-50.85.169.122:50000.service - OpenSSH per-connection server daemon (50.85.169.122:50000).
Apr 21 10:19:25.895760 sshd[5046]: Accepted publickey for core from 50.85.169.122 port 50000 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:19:25.896548 sshd[5046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:19:25.901006 systemd-logind[2076]: New session 15 of user core.
Apr 21 10:19:25.909422 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 21 10:19:27.084862 sshd[5046]: pam_unix(sshd:session): session closed for user core
Apr 21 10:19:27.092490 systemd[1]: sshd@14-172.31.28.88:22-50.85.169.122:50000.service: Deactivated successfully.
Apr 21 10:19:27.097995 systemd-logind[2076]: Session 15 logged out. Waiting for processes to exit.
Apr 21 10:19:27.098578 systemd[1]: session-15.scope: Deactivated successfully.
Apr 21 10:19:27.100559 systemd-logind[2076]: Removed session 15.
Apr 21 10:19:27.259411 systemd[1]: Started sshd@15-172.31.28.88:22-50.85.169.122:50010.service - OpenSSH per-connection server daemon (50.85.169.122:50010).
Apr 21 10:19:28.270200 sshd[5057]: Accepted publickey for core from 50.85.169.122 port 50010 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:19:28.270899 sshd[5057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:19:28.276469 systemd-logind[2076]: New session 16 of user core.
Apr 21 10:19:28.284708 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 21 10:19:29.535569 sshd[5057]: pam_unix(sshd:session): session closed for user core
Apr 21 10:19:29.541228 systemd[1]: sshd@15-172.31.28.88:22-50.85.169.122:50010.service: Deactivated successfully.
Apr 21 10:19:29.541951 systemd-logind[2076]: Session 16 logged out. Waiting for processes to exit.
Apr 21 10:19:29.546840 systemd[1]: session-16.scope: Deactivated successfully.
Apr 21 10:19:29.548749 systemd-logind[2076]: Removed session 16.
Apr 21 10:19:29.698420 systemd[1]: Started sshd@16-172.31.28.88:22-50.85.169.122:43290.service - OpenSSH per-connection server daemon (50.85.169.122:43290).
Apr 21 10:19:30.721356 sshd[5076]: Accepted publickey for core from 50.85.169.122 port 43290 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:19:30.722950 sshd[5076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:19:30.727931 systemd-logind[2076]: New session 17 of user core.
Apr 21 10:19:30.737509 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 21 10:19:31.648481 sshd[5076]: pam_unix(sshd:session): session closed for user core
Apr 21 10:19:31.654013 systemd[1]: sshd@16-172.31.28.88:22-50.85.169.122:43290.service: Deactivated successfully.
Apr 21 10:19:31.654131 systemd-logind[2076]: Session 17 logged out. Waiting for processes to exit.
Apr 21 10:19:31.659807 systemd[1]: session-17.scope: Deactivated successfully.
Apr 21 10:19:31.660954 systemd-logind[2076]: Removed session 17.
Apr 21 10:19:31.822913 systemd[1]: Started sshd@17-172.31.28.88:22-50.85.169.122:43300.service - OpenSSH per-connection server daemon (50.85.169.122:43300).
Apr 21 10:19:32.834162 sshd[5088]: Accepted publickey for core from 50.85.169.122 port 43300 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:19:32.835804 sshd[5088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:19:32.841175 systemd-logind[2076]: New session 18 of user core.
Apr 21 10:19:32.847515 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 21 10:19:33.605723 sshd[5088]: pam_unix(sshd:session): session closed for user core
Apr 21 10:19:33.612611 systemd[1]: sshd@17-172.31.28.88:22-50.85.169.122:43300.service: Deactivated successfully.
Apr 21 10:19:33.616041 systemd[1]: session-18.scope: Deactivated successfully.
Apr 21 10:19:33.617453 systemd-logind[2076]: Session 18 logged out. Waiting for processes to exit.
Apr 21 10:19:33.618690 systemd-logind[2076]: Removed session 18.
Apr 21 10:19:38.782315 systemd[1]: Started sshd@18-172.31.28.88:22-50.85.169.122:43304.service - OpenSSH per-connection server daemon (50.85.169.122:43304).
Apr 21 10:19:39.813237 sshd[5104]: Accepted publickey for core from 50.85.169.122 port 43304 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:19:39.814904 sshd[5104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:19:39.819770 systemd-logind[2076]: New session 19 of user core.
Apr 21 10:19:39.826414 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 21 10:19:40.602003 sshd[5104]: pam_unix(sshd:session): session closed for user core
Apr 21 10:19:40.606639 systemd[1]: sshd@18-172.31.28.88:22-50.85.169.122:43304.service: Deactivated successfully.
Apr 21 10:19:40.611544 systemd[1]: session-19.scope: Deactivated successfully.
Apr 21 10:19:40.612580 systemd-logind[2076]: Session 19 logged out. Waiting for processes to exit.
Apr 21 10:19:40.613683 systemd-logind[2076]: Removed session 19.
Apr 21 10:19:45.774696 systemd[1]: Started sshd@19-172.31.28.88:22-50.85.169.122:38460.service - OpenSSH per-connection server daemon (50.85.169.122:38460).
Apr 21 10:19:46.781679 sshd[5119]: Accepted publickey for core from 50.85.169.122 port 38460 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:19:46.783410 sshd[5119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:19:46.788225 systemd-logind[2076]: New session 20 of user core.
Apr 21 10:19:46.793825 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 21 10:19:47.555572 sshd[5119]: pam_unix(sshd:session): session closed for user core
Apr 21 10:19:47.560380 systemd-logind[2076]: Session 20 logged out. Waiting for processes to exit.
Apr 21 10:19:47.563383 systemd[1]: sshd@19-172.31.28.88:22-50.85.169.122:38460.service: Deactivated successfully.
Apr 21 10:19:47.567339 systemd[1]: session-20.scope: Deactivated successfully.
Apr 21 10:19:47.569071 systemd-logind[2076]: Removed session 20.
Apr 21 10:19:47.721881 systemd[1]: Started sshd@20-172.31.28.88:22-50.85.169.122:38472.service - OpenSSH per-connection server daemon (50.85.169.122:38472).
Apr 21 10:19:48.717102 sshd[5133]: Accepted publickey for core from 50.85.169.122 port 38472 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:19:48.718140 sshd[5133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:19:48.723549 systemd-logind[2076]: New session 21 of user core.
Apr 21 10:19:48.728570 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 21 10:19:50.631278 containerd[2102]: time="2026-04-21T10:19:50.631231122Z" level=info msg="StopContainer for \"659ac91ec4e67d4a59d0b3d902622d219fc6b25dce1ba86f139022766ca92b79\" with timeout 30 (s)"
Apr 21 10:19:50.634384 containerd[2102]: time="2026-04-21T10:19:50.634350074Z" level=info msg="Stop container \"659ac91ec4e67d4a59d0b3d902622d219fc6b25dce1ba86f139022766ca92b79\" with signal terminated"
Apr 21 10:19:50.723888 containerd[2102]: time="2026-04-21T10:19:50.723298809Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 21 10:19:50.728555 containerd[2102]: time="2026-04-21T10:19:50.728344886Z" level=info msg="StopContainer for \"e0577e15ac27cc169b508cf8ff6872fe882d212bd5bfab59323f39936ad33416\" with timeout 2 (s)"
Apr 21 10:19:50.729000 containerd[2102]: time="2026-04-21T10:19:50.728898657Z" level=info msg="Stop container \"e0577e15ac27cc169b508cf8ff6872fe882d212bd5bfab59323f39936ad33416\" with signal terminated"
Apr 21 10:19:50.746023 systemd-networkd[1655]: lxc_health: Link DOWN
Apr 21 10:19:50.746035 systemd-networkd[1655]: lxc_health: Lost carrier
Apr 21 10:19:50.748890 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-659ac91ec4e67d4a59d0b3d902622d219fc6b25dce1ba86f139022766ca92b79-rootfs.mount: Deactivated successfully.
Apr 21 10:19:50.768220 containerd[2102]: time="2026-04-21T10:19:50.768141252Z" level=info msg="shim disconnected" id=659ac91ec4e67d4a59d0b3d902622d219fc6b25dce1ba86f139022766ca92b79 namespace=k8s.io
Apr 21 10:19:50.768220 containerd[2102]: time="2026-04-21T10:19:50.768214219Z" level=warning msg="cleaning up after shim disconnected" id=659ac91ec4e67d4a59d0b3d902622d219fc6b25dce1ba86f139022766ca92b79 namespace=k8s.io
Apr 21 10:19:50.768620 containerd[2102]: time="2026-04-21T10:19:50.768230962Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:19:50.797032 containerd[2102]: time="2026-04-21T10:19:50.796978174Z" level=warning msg="cleanup warnings time=\"2026-04-21T10:19:50Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 21 10:19:50.803734 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0577e15ac27cc169b508cf8ff6872fe882d212bd5bfab59323f39936ad33416-rootfs.mount: Deactivated successfully.
Apr 21 10:19:50.808617 containerd[2102]: time="2026-04-21T10:19:50.808571022Z" level=info msg="StopContainer for \"659ac91ec4e67d4a59d0b3d902622d219fc6b25dce1ba86f139022766ca92b79\" returns successfully"
Apr 21 10:19:50.811042 containerd[2102]: time="2026-04-21T10:19:50.809276447Z" level=info msg="StopPodSandbox for \"12675caca024e3e84ee6b29026419b7e6cf2dc5f53d3ac1acd818559d95f34f5\""
Apr 21 10:19:50.811042 containerd[2102]: time="2026-04-21T10:19:50.809328582Z" level=info msg="Container to stop \"659ac91ec4e67d4a59d0b3d902622d219fc6b25dce1ba86f139022766ca92b79\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 21 10:19:50.815472 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-12675caca024e3e84ee6b29026419b7e6cf2dc5f53d3ac1acd818559d95f34f5-shm.mount: Deactivated successfully.
Apr 21 10:19:50.822219 containerd[2102]: time="2026-04-21T10:19:50.822152491Z" level=info msg="shim disconnected" id=e0577e15ac27cc169b508cf8ff6872fe882d212bd5bfab59323f39936ad33416 namespace=k8s.io
Apr 21 10:19:50.822219 containerd[2102]: time="2026-04-21T10:19:50.822220006Z" level=warning msg="cleaning up after shim disconnected" id=e0577e15ac27cc169b508cf8ff6872fe882d212bd5bfab59323f39936ad33416 namespace=k8s.io
Apr 21 10:19:50.822219 containerd[2102]: time="2026-04-21T10:19:50.822231480Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:19:50.854470 containerd[2102]: time="2026-04-21T10:19:50.854425660Z" level=info msg="StopContainer for \"e0577e15ac27cc169b508cf8ff6872fe882d212bd5bfab59323f39936ad33416\" returns successfully"
Apr 21 10:19:50.865111 containerd[2102]: time="2026-04-21T10:19:50.865019984Z" level=info msg="StopPodSandbox for \"7b40c47d44f77e48a687e47349f7c465ee3d9f57c929f9356c412572dd123d20\""
Apr 21 10:19:50.865111 containerd[2102]: time="2026-04-21T10:19:50.865095376Z" level=info msg="Container to stop \"96ddb19c93a24730663e3813c0a61168dcf90b8e14ec6883912d85d911abb047\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 21 10:19:50.865111 containerd[2102]: time="2026-04-21T10:19:50.865114317Z" level=info msg="Container to stop \"e0577e15ac27cc169b508cf8ff6872fe882d212bd5bfab59323f39936ad33416\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 21 10:19:50.866879 containerd[2102]: time="2026-04-21T10:19:50.865127971Z" level=info msg="Container to stop \"df2157902d8ae987bc1f2594d9dbdcb1e7da15eb97b1f312f0bde7ed55691a77\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 21 10:19:50.866879 containerd[2102]: time="2026-04-21T10:19:50.865142227Z" level=info msg="Container to stop \"faa536c62cf7e7b1aa057e2ff6b492005432fc13f38f21defba7f3e1e98600bf\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 21 10:19:50.866879 containerd[2102]: time="2026-04-21T10:19:50.865154983Z" level=info msg="Container to stop \"60256dffcdea41f391587c5dca3c90d04a8c70bf5529c54cf5e36a0a33f6810f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 21 10:19:50.874808 containerd[2102]: time="2026-04-21T10:19:50.874650751Z" level=info msg="shim disconnected" id=12675caca024e3e84ee6b29026419b7e6cf2dc5f53d3ac1acd818559d95f34f5 namespace=k8s.io
Apr 21 10:19:50.874808 containerd[2102]: time="2026-04-21T10:19:50.874708513Z" level=warning msg="cleaning up after shim disconnected" id=12675caca024e3e84ee6b29026419b7e6cf2dc5f53d3ac1acd818559d95f34f5 namespace=k8s.io
Apr 21 10:19:50.874808 containerd[2102]: time="2026-04-21T10:19:50.874720597Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:19:50.911204 containerd[2102]: time="2026-04-21T10:19:50.909690278Z" level=warning msg="cleanup warnings time=\"2026-04-21T10:19:50Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 21 10:19:50.911707 containerd[2102]: time="2026-04-21T10:19:50.911668448Z" level=info msg="TearDown network for sandbox \"12675caca024e3e84ee6b29026419b7e6cf2dc5f53d3ac1acd818559d95f34f5\" successfully"
Apr 21 10:19:50.911826 containerd[2102]: time="2026-04-21T10:19:50.911748491Z" level=info msg="StopPodSandbox for \"12675caca024e3e84ee6b29026419b7e6cf2dc5f53d3ac1acd818559d95f34f5\" returns successfully"
Apr 21 10:19:50.931950 containerd[2102]: time="2026-04-21T10:19:50.931889516Z" level=info msg="shim disconnected" id=7b40c47d44f77e48a687e47349f7c465ee3d9f57c929f9356c412572dd123d20 namespace=k8s.io
Apr 21 10:19:50.931950 containerd[2102]: time="2026-04-21T10:19:50.931941930Z" level=warning msg="cleaning up after shim disconnected" id=7b40c47d44f77e48a687e47349f7c465ee3d9f57c929f9356c412572dd123d20 namespace=k8s.io
Apr 21 10:19:50.931950 containerd[2102]: time="2026-04-21T10:19:50.931952654Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:19:50.951897 containerd[2102]: time="2026-04-21T10:19:50.951792716Z" level=info msg="TearDown network for sandbox \"7b40c47d44f77e48a687e47349f7c465ee3d9f57c929f9356c412572dd123d20\" successfully"
Apr 21 10:19:50.951897 containerd[2102]: time="2026-04-21T10:19:50.951832889Z" level=info msg="StopPodSandbox for \"7b40c47d44f77e48a687e47349f7c465ee3d9f57c929f9356c412572dd123d20\" returns successfully"
Apr 21 10:19:51.020583 kubelet[3389]: I0421 10:19:51.020525 3389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/800c6a97-3ced-4524-aef0-0980ec19a935-cni-path\") pod \"800c6a97-3ced-4524-aef0-0980ec19a935\" (UID: \"800c6a97-3ced-4524-aef0-0980ec19a935\") "
Apr 21 10:19:51.020583 kubelet[3389]: I0421 10:19:51.020591 3389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c8a5b0c0-4977-4663-9e7c-914b9f04c1cf-cilium-config-path\") pod \"c8a5b0c0-4977-4663-9e7c-914b9f04c1cf\" (UID: \"c8a5b0c0-4977-4663-9e7c-914b9f04c1cf\") "
Apr 21 10:19:51.021375 kubelet[3389]: I0421 10:19:51.020621 3389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/800c6a97-3ced-4524-aef0-0980ec19a935-cilium-config-path\") pod \"800c6a97-3ced-4524-aef0-0980ec19a935\" (UID: \"800c6a97-3ced-4524-aef0-0980ec19a935\") "
Apr 21 10:19:51.021375 kubelet[3389]: I0421 10:19:51.020640 3389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/800c6a97-3ced-4524-aef0-0980ec19a935-hostproc\") pod \"800c6a97-3ced-4524-aef0-0980ec19a935\" (UID: \"800c6a97-3ced-4524-aef0-0980ec19a935\") "
Apr 21 10:19:51.021375 kubelet[3389]: I0421 10:19:51.020662 3389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/800c6a97-3ced-4524-aef0-0980ec19a935-hubble-tls\") pod \"800c6a97-3ced-4524-aef0-0980ec19a935\" (UID: \"800c6a97-3ced-4524-aef0-0980ec19a935\") "
Apr 21 10:19:51.021375 kubelet[3389]: I0421 10:19:51.020682 3389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/800c6a97-3ced-4524-aef0-0980ec19a935-host-proc-sys-net\") pod \"800c6a97-3ced-4524-aef0-0980ec19a935\" (UID: \"800c6a97-3ced-4524-aef0-0980ec19a935\") "
Apr 21 10:19:51.021375 kubelet[3389]: I0421 10:19:51.020708 3389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/800c6a97-3ced-4524-aef0-0980ec19a935-etc-cni-netd\") pod \"800c6a97-3ced-4524-aef0-0980ec19a935\" (UID: \"800c6a97-3ced-4524-aef0-0980ec19a935\") "
Apr 21 10:19:51.021375 kubelet[3389]: I0421 10:19:51.020732 3389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/800c6a97-3ced-4524-aef0-0980ec19a935-xtables-lock\") pod \"800c6a97-3ced-4524-aef0-0980ec19a935\" (UID: \"800c6a97-3ced-4524-aef0-0980ec19a935\") "
Apr 21 10:19:51.021566 kubelet[3389]: I0421 10:19:51.020758 3389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/800c6a97-3ced-4524-aef0-0980ec19a935-cilium-run\") pod \"800c6a97-3ced-4524-aef0-0980ec19a935\" (UID: \"800c6a97-3ced-4524-aef0-0980ec19a935\") "
Apr 21 10:19:51.021566 kubelet[3389]: I0421 10:19:51.020780 3389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/800c6a97-3ced-4524-aef0-0980ec19a935-lib-modules\") pod \"800c6a97-3ced-4524-aef0-0980ec19a935\" (UID: \"800c6a97-3ced-4524-aef0-0980ec19a935\") "
Apr 21 10:19:51.021566 kubelet[3389]: I0421 10:19:51.020801 3389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/800c6a97-3ced-4524-aef0-0980ec19a935-bpf-maps\") pod \"800c6a97-3ced-4524-aef0-0980ec19a935\" (UID: \"800c6a97-3ced-4524-aef0-0980ec19a935\") "
Apr 21 10:19:51.021566 kubelet[3389]: I0421 10:19:51.020829 3389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrhgp\" (UniqueName: \"kubernetes.io/projected/800c6a97-3ced-4524-aef0-0980ec19a935-kube-api-access-rrhgp\") pod \"800c6a97-3ced-4524-aef0-0980ec19a935\" (UID: \"800c6a97-3ced-4524-aef0-0980ec19a935\") "
Apr 21 10:19:51.021566 kubelet[3389]: I0421 10:19:51.020857 3389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/800c6a97-3ced-4524-aef0-0980ec19a935-clustermesh-secrets\") pod \"800c6a97-3ced-4524-aef0-0980ec19a935\" (UID: \"800c6a97-3ced-4524-aef0-0980ec19a935\") "
Apr 21 10:19:51.021566 kubelet[3389]: I0421 10:19:51.020883 3389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/800c6a97-3ced-4524-aef0-0980ec19a935-cilium-cgroup\") pod \"800c6a97-3ced-4524-aef0-0980ec19a935\" (UID: \"800c6a97-3ced-4524-aef0-0980ec19a935\") "
Apr 21 10:19:51.021756 kubelet[3389]: I0421 10:19:51.020907 3389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/800c6a97-3ced-4524-aef0-0980ec19a935-host-proc-sys-kernel\") pod \"800c6a97-3ced-4524-aef0-0980ec19a935\" (UID: \"800c6a97-3ced-4524-aef0-0980ec19a935\") "
Apr 21 10:19:51.021756 kubelet[3389]: I0421 10:19:51.020934 3389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k46ld\" (UniqueName: \"kubernetes.io/projected/c8a5b0c0-4977-4663-9e7c-914b9f04c1cf-kube-api-access-k46ld\") pod \"c8a5b0c0-4977-4663-9e7c-914b9f04c1cf\" (UID: \"c8a5b0c0-4977-4663-9e7c-914b9f04c1cf\") "
Apr 21 10:19:51.039076 kubelet[3389]: I0421 10:19:51.036753 3389 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/800c6a97-3ced-4524-aef0-0980ec19a935-cni-path" (OuterVolumeSpecName: "cni-path") pod "800c6a97-3ced-4524-aef0-0980ec19a935" (UID: "800c6a97-3ced-4524-aef0-0980ec19a935"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 10:19:51.043151 kubelet[3389]: I0421 10:19:51.041632 3389 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/800c6a97-3ced-4524-aef0-0980ec19a935-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "800c6a97-3ced-4524-aef0-0980ec19a935" (UID: "800c6a97-3ced-4524-aef0-0980ec19a935"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 10:19:51.043151 kubelet[3389]: I0421 10:19:51.041712 3389 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/800c6a97-3ced-4524-aef0-0980ec19a935-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "800c6a97-3ced-4524-aef0-0980ec19a935" (UID: "800c6a97-3ced-4524-aef0-0980ec19a935"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 10:19:51.043151 kubelet[3389]: I0421 10:19:51.041742 3389 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/800c6a97-3ced-4524-aef0-0980ec19a935-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "800c6a97-3ced-4524-aef0-0980ec19a935" (UID: "800c6a97-3ced-4524-aef0-0980ec19a935"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 10:19:51.044188 kubelet[3389]: I0421 10:19:51.044143 3389 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8a5b0c0-4977-4663-9e7c-914b9f04c1cf-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c8a5b0c0-4977-4663-9e7c-914b9f04c1cf" (UID: "c8a5b0c0-4977-4663-9e7c-914b9f04c1cf"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 21 10:19:51.046556 kubelet[3389]: I0421 10:19:51.046515 3389 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/800c6a97-3ced-4524-aef0-0980ec19a935-kube-api-access-rrhgp" (OuterVolumeSpecName: "kube-api-access-rrhgp") pod "800c6a97-3ced-4524-aef0-0980ec19a935" (UID: "800c6a97-3ced-4524-aef0-0980ec19a935"). InnerVolumeSpecName "kube-api-access-rrhgp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 21 10:19:51.048996 kubelet[3389]: I0421 10:19:51.048956 3389 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/800c6a97-3ced-4524-aef0-0980ec19a935-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "800c6a97-3ced-4524-aef0-0980ec19a935" (UID: "800c6a97-3ced-4524-aef0-0980ec19a935"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 21 10:19:51.049200 kubelet[3389]: I0421 10:19:51.049178 3389 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/800c6a97-3ced-4524-aef0-0980ec19a935-hostproc" (OuterVolumeSpecName: "hostproc") pod "800c6a97-3ced-4524-aef0-0980ec19a935" (UID: "800c6a97-3ced-4524-aef0-0980ec19a935"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 10:19:51.049811 kubelet[3389]: I0421 10:19:51.049780 3389 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/800c6a97-3ced-4524-aef0-0980ec19a935-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "800c6a97-3ced-4524-aef0-0980ec19a935" (UID: "800c6a97-3ced-4524-aef0-0980ec19a935"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 21 10:19:51.049897 kubelet[3389]: I0421 10:19:51.049833 3389 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/800c6a97-3ced-4524-aef0-0980ec19a935-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "800c6a97-3ced-4524-aef0-0980ec19a935" (UID: "800c6a97-3ced-4524-aef0-0980ec19a935"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 10:19:51.049897 kubelet[3389]: I0421 10:19:51.049855 3389 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/800c6a97-3ced-4524-aef0-0980ec19a935-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "800c6a97-3ced-4524-aef0-0980ec19a935" (UID: "800c6a97-3ced-4524-aef0-0980ec19a935"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 10:19:51.049897 kubelet[3389]: I0421 10:19:51.033144 3389 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/800c6a97-3ced-4524-aef0-0980ec19a935-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "800c6a97-3ced-4524-aef0-0980ec19a935" (UID: "800c6a97-3ced-4524-aef0-0980ec19a935"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 10:19:51.052916 kubelet[3389]: I0421 10:19:51.052885 3389 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/800c6a97-3ced-4524-aef0-0980ec19a935-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "800c6a97-3ced-4524-aef0-0980ec19a935" (UID: "800c6a97-3ced-4524-aef0-0980ec19a935"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 21 10:19:51.053040 kubelet[3389]: I0421 10:19:51.052894 3389 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/800c6a97-3ced-4524-aef0-0980ec19a935-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "800c6a97-3ced-4524-aef0-0980ec19a935" (UID: "800c6a97-3ced-4524-aef0-0980ec19a935"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 10:19:51.053144 kubelet[3389]: I0421 10:19:51.052912 3389 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/800c6a97-3ced-4524-aef0-0980ec19a935-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "800c6a97-3ced-4524-aef0-0980ec19a935" (UID: "800c6a97-3ced-4524-aef0-0980ec19a935"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 21 10:19:51.053200 kubelet[3389]: I0421 10:19:51.052928 3389 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8a5b0c0-4977-4663-9e7c-914b9f04c1cf-kube-api-access-k46ld" (OuterVolumeSpecName: "kube-api-access-k46ld") pod "c8a5b0c0-4977-4663-9e7c-914b9f04c1cf" (UID: "c8a5b0c0-4977-4663-9e7c-914b9f04c1cf"). InnerVolumeSpecName "kube-api-access-k46ld". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 21 10:19:51.121964 kubelet[3389]: I0421 10:19:51.121915 3389 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/800c6a97-3ced-4524-aef0-0980ec19a935-lib-modules\") on node \"ip-172-31-28-88\" DevicePath \"\""
Apr 21 10:19:51.121964 kubelet[3389]: I0421 10:19:51.121955 3389 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/800c6a97-3ced-4524-aef0-0980ec19a935-bpf-maps\") on node \"ip-172-31-28-88\" DevicePath \"\""
Apr 21 10:19:51.121964 kubelet[3389]: I0421 10:19:51.121970 3389 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rrhgp\" (UniqueName: \"kubernetes.io/projected/800c6a97-3ced-4524-aef0-0980ec19a935-kube-api-access-rrhgp\") on node \"ip-172-31-28-88\" DevicePath \"\""
Apr 21 10:19:51.122239 kubelet[3389]: I0421 10:19:51.121983 3389 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/800c6a97-3ced-4524-aef0-0980ec19a935-clustermesh-secrets\") on node \"ip-172-31-28-88\" DevicePath \"\""
Apr 21 10:19:51.122239 kubelet[3389]: I0421 10:19:51.121997 3389 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/800c6a97-3ced-4524-aef0-0980ec19a935-cilium-cgroup\") on node \"ip-172-31-28-88\" DevicePath \"\""
Apr 21 10:19:51.122239 kubelet[3389]: I0421 10:19:51.122008 3389 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/800c6a97-3ced-4524-aef0-0980ec19a935-host-proc-sys-kernel\") on node \"ip-172-31-28-88\" DevicePath \"\""
Apr 21 10:19:51.122239 kubelet[3389]: I0421 10:19:51.122020 3389 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k46ld\" (UniqueName: \"kubernetes.io/projected/c8a5b0c0-4977-4663-9e7c-914b9f04c1cf-kube-api-access-k46ld\") on node \"ip-172-31-28-88\" DevicePath \"\""
Apr 21 10:19:51.122239 kubelet[3389]: I0421 10:19:51.122031 3389 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/800c6a97-3ced-4524-aef0-0980ec19a935-cni-path\") on node \"ip-172-31-28-88\" DevicePath \"\""
Apr 21 10:19:51.122239 kubelet[3389]: I0421 10:19:51.122043 3389 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c8a5b0c0-4977-4663-9e7c-914b9f04c1cf-cilium-config-path\") on node \"ip-172-31-28-88\" DevicePath \"\""
Apr 21 10:19:51.122239 kubelet[3389]: I0421 10:19:51.122077 3389 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/800c6a97-3ced-4524-aef0-0980ec19a935-cilium-config-path\") on node \"ip-172-31-28-88\" DevicePath \"\""
Apr 21 10:19:51.122239 kubelet[3389]: I0421 10:19:51.122092 3389 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/800c6a97-3ced-4524-aef0-0980ec19a935-hostproc\") on node \"ip-172-31-28-88\" DevicePath \"\""
Apr 21 10:19:51.122468 kubelet[3389]: I0421 10:19:51.122105 3389 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/800c6a97-3ced-4524-aef0-0980ec19a935-hubble-tls\") on node \"ip-172-31-28-88\" DevicePath \"\""
Apr 21 10:19:51.122468 kubelet[3389]: I0421 10:19:51.122118 3389 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/800c6a97-3ced-4524-aef0-0980ec19a935-host-proc-sys-net\") on node \"ip-172-31-28-88\" DevicePath \"\""
Apr 21 10:19:51.122468 kubelet[3389]: I0421 10:19:51.122129 3389 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/800c6a97-3ced-4524-aef0-0980ec19a935-etc-cni-netd\") on node \"ip-172-31-28-88\" DevicePath \"\""
Apr 21 10:19:51.122468 kubelet[3389]: I0421 10:19:51.122141 3389 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/800c6a97-3ced-4524-aef0-0980ec19a935-xtables-lock\") on node \"ip-172-31-28-88\" DevicePath \"\""
Apr 21 10:19:51.122468 kubelet[3389]: I0421 10:19:51.122152 3389 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/800c6a97-3ced-4524-aef0-0980ec19a935-cilium-run\") on node \"ip-172-31-28-88\" DevicePath \"\""
Apr 21 10:19:51.673518 kubelet[3389]: I0421 10:19:51.673434 3389 scope.go:117] "RemoveContainer" containerID="659ac91ec4e67d4a59d0b3d902622d219fc6b25dce1ba86f139022766ca92b79"
Apr 21 10:19:51.694154 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12675caca024e3e84ee6b29026419b7e6cf2dc5f53d3ac1acd818559d95f34f5-rootfs.mount: Deactivated successfully.
Apr 21 10:19:51.694365 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b40c47d44f77e48a687e47349f7c465ee3d9f57c929f9356c412572dd123d20-rootfs.mount: Deactivated successfully.
Apr 21 10:19:51.694504 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7b40c47d44f77e48a687e47349f7c465ee3d9f57c929f9356c412572dd123d20-shm.mount: Deactivated successfully.
Apr 21 10:19:51.694651 systemd[1]: var-lib-kubelet-pods-c8a5b0c0\x2d4977\x2d4663\x2d9e7c\x2d914b9f04c1cf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk46ld.mount: Deactivated successfully.
Apr 21 10:19:51.694784 systemd[1]: var-lib-kubelet-pods-800c6a97\x2d3ced\x2d4524\x2daef0\x2d0980ec19a935-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drrhgp.mount: Deactivated successfully.
Apr 21 10:19:51.694927 systemd[1]: var-lib-kubelet-pods-800c6a97\x2d3ced\x2d4524\x2daef0\x2d0980ec19a935-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Apr 21 10:19:51.696174 systemd[1]: var-lib-kubelet-pods-800c6a97\x2d3ced\x2d4524\x2daef0\x2d0980ec19a935-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Apr 21 10:19:51.737377 containerd[2102]: time="2026-04-21T10:19:51.737204137Z" level=info msg="RemoveContainer for \"659ac91ec4e67d4a59d0b3d902622d219fc6b25dce1ba86f139022766ca92b79\""
Apr 21 10:19:51.755307 containerd[2102]: time="2026-04-21T10:19:51.755142630Z" level=info msg="RemoveContainer for \"659ac91ec4e67d4a59d0b3d902622d219fc6b25dce1ba86f139022766ca92b79\" returns successfully"
Apr 21 10:19:51.767048 kubelet[3389]: I0421 10:19:51.766987 3389 scope.go:117] "RemoveContainer" containerID="659ac91ec4e67d4a59d0b3d902622d219fc6b25dce1ba86f139022766ca92b79"
Apr 21 10:19:51.796564 containerd[2102]: time="2026-04-21T10:19:51.772821018Z" level=error msg="ContainerStatus for \"659ac91ec4e67d4a59d0b3d902622d219fc6b25dce1ba86f139022766ca92b79\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"659ac91ec4e67d4a59d0b3d902622d219fc6b25dce1ba86f139022766ca92b79\": not found"
Apr 21 10:19:51.808467 kubelet[3389]: E0421 10:19:51.808351 3389 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"659ac91ec4e67d4a59d0b3d902622d219fc6b25dce1ba86f139022766ca92b79\": not found" containerID="659ac91ec4e67d4a59d0b3d902622d219fc6b25dce1ba86f139022766ca92b79"
Apr 21 10:19:51.827823 kubelet[3389]: I0421 10:19:51.810752 3389 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"659ac91ec4e67d4a59d0b3d902622d219fc6b25dce1ba86f139022766ca92b79"} err="failed to get container status \"659ac91ec4e67d4a59d0b3d902622d219fc6b25dce1ba86f139022766ca92b79\": rpc error: code = NotFound desc = an error occurred when try to find container \"659ac91ec4e67d4a59d0b3d902622d219fc6b25dce1ba86f139022766ca92b79\": not found"
Apr 21 10:19:51.827823 kubelet[3389]: I0421 10:19:51.827827 3389 scope.go:117] "RemoveContainer" containerID="e0577e15ac27cc169b508cf8ff6872fe882d212bd5bfab59323f39936ad33416"
Apr 21 10:19:51.829571 containerd[2102]: time="2026-04-21T10:19:51.829536421Z" level=info msg="RemoveContainer for \"e0577e15ac27cc169b508cf8ff6872fe882d212bd5bfab59323f39936ad33416\""
Apr 21 10:19:51.835150 containerd[2102]: time="2026-04-21T10:19:51.835106734Z" level=info msg="RemoveContainer for \"e0577e15ac27cc169b508cf8ff6872fe882d212bd5bfab59323f39936ad33416\" returns successfully"
Apr 21 10:19:51.835543 kubelet[3389]: I0421 10:19:51.835482 3389 scope.go:117] "RemoveContainer" containerID="96ddb19c93a24730663e3813c0a61168dcf90b8e14ec6883912d85d911abb047"
Apr 21 10:19:51.836933 containerd[2102]: time="2026-04-21T10:19:51.836893144Z" level=info msg="RemoveContainer for \"96ddb19c93a24730663e3813c0a61168dcf90b8e14ec6883912d85d911abb047\""
Apr 21 10:19:51.842369 containerd[2102]: time="2026-04-21T10:19:51.842327432Z" level=info msg="RemoveContainer for \"96ddb19c93a24730663e3813c0a61168dcf90b8e14ec6883912d85d911abb047\" returns successfully"
Apr 21 10:19:51.842725 kubelet[3389]: I0421 10:19:51.842614 3389 scope.go:117] "RemoveContainer" containerID="60256dffcdea41f391587c5dca3c90d04a8c70bf5529c54cf5e36a0a33f6810f"
Apr 21 10:19:51.843831 containerd[2102]: time="2026-04-21T10:19:51.843796169Z" level=info msg="RemoveContainer for \"60256dffcdea41f391587c5dca3c90d04a8c70bf5529c54cf5e36a0a33f6810f\""
Apr 21 10:19:51.849473 containerd[2102]: time="2026-04-21T10:19:51.849419702Z" level=info msg="RemoveContainer for \"60256dffcdea41f391587c5dca3c90d04a8c70bf5529c54cf5e36a0a33f6810f\" returns successfully"
Apr 21 10:19:51.849788 kubelet[3389]: I0421 10:19:51.849757 3389 scope.go:117] "RemoveContainer" containerID="faa536c62cf7e7b1aa057e2ff6b492005432fc13f38f21defba7f3e1e98600bf"
Apr 21 10:19:51.850997 containerd[2102]: time="2026-04-21T10:19:51.850958547Z" level=info msg="RemoveContainer for \"faa536c62cf7e7b1aa057e2ff6b492005432fc13f38f21defba7f3e1e98600bf\""
Apr 21 10:19:51.856530 containerd[2102]: time="2026-04-21T10:19:51.856464828Z" level=info msg="RemoveContainer for \"faa536c62cf7e7b1aa057e2ff6b492005432fc13f38f21defba7f3e1e98600bf\" returns successfully"
Apr 21 10:19:51.856871 kubelet[3389]: I0421 10:19:51.856843 3389 scope.go:117] "RemoveContainer" containerID="df2157902d8ae987bc1f2594d9dbdcb1e7da15eb97b1f312f0bde7ed55691a77"
Apr 21 10:19:51.858149 containerd[2102]: time="2026-04-21T10:19:51.858116520Z" level=info msg="RemoveContainer for \"df2157902d8ae987bc1f2594d9dbdcb1e7da15eb97b1f312f0bde7ed55691a77\""
Apr 21 10:19:51.863397 containerd[2102]: time="2026-04-21T10:19:51.863354284Z" level=info msg="RemoveContainer for \"df2157902d8ae987bc1f2594d9dbdcb1e7da15eb97b1f312f0bde7ed55691a77\" returns successfully"
Apr 21 10:19:51.863671 kubelet[3389]: I0421 10:19:51.863641 3389 scope.go:117] "RemoveContainer" containerID="e0577e15ac27cc169b508cf8ff6872fe882d212bd5bfab59323f39936ad33416"
Apr 21 10:19:51.863939 containerd[2102]: time="2026-04-21T10:19:51.863881077Z" level=error msg="ContainerStatus for \"e0577e15ac27cc169b508cf8ff6872fe882d212bd5bfab59323f39936ad33416\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e0577e15ac27cc169b508cf8ff6872fe882d212bd5bfab59323f39936ad33416\": not found"
Apr 21 10:19:51.864252 kubelet[3389]: E0421 10:19:51.864216 3389 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e0577e15ac27cc169b508cf8ff6872fe882d212bd5bfab59323f39936ad33416\": not found" containerID="e0577e15ac27cc169b508cf8ff6872fe882d212bd5bfab59323f39936ad33416"
Apr 21 10:19:51.864325 kubelet[3389]: I0421 10:19:51.864255 3389 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e0577e15ac27cc169b508cf8ff6872fe882d212bd5bfab59323f39936ad33416"} err="failed to get container status \"e0577e15ac27cc169b508cf8ff6872fe882d212bd5bfab59323f39936ad33416\": rpc error: code = NotFound desc = an error occurred when try to find container \"e0577e15ac27cc169b508cf8ff6872fe882d212bd5bfab59323f39936ad33416\": not found"
Apr 21 10:19:51.864325 kubelet[3389]: I0421 10:19:51.864282 3389 scope.go:117] "RemoveContainer" containerID="96ddb19c93a24730663e3813c0a61168dcf90b8e14ec6883912d85d911abb047"
Apr 21 10:19:51.864672 containerd[2102]: time="2026-04-21T10:19:51.864632089Z" level=error msg="ContainerStatus for \"96ddb19c93a24730663e3813c0a61168dcf90b8e14ec6883912d85d911abb047\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"96ddb19c93a24730663e3813c0a61168dcf90b8e14ec6883912d85d911abb047\": not found"
Apr 21 10:19:51.864820 kubelet[3389]: E0421 10:19:51.864794 3389 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"96ddb19c93a24730663e3813c0a61168dcf90b8e14ec6883912d85d911abb047\": not found" containerID="96ddb19c93a24730663e3813c0a61168dcf90b8e14ec6883912d85d911abb047"
Apr 21 10:19:51.864905 kubelet[3389]: I0421 10:19:51.864825 3389 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"96ddb19c93a24730663e3813c0a61168dcf90b8e14ec6883912d85d911abb047"} err="failed to get container status \"96ddb19c93a24730663e3813c0a61168dcf90b8e14ec6883912d85d911abb047\": rpc error: code = NotFound desc = an error occurred when try to find container \"96ddb19c93a24730663e3813c0a61168dcf90b8e14ec6883912d85d911abb047\": not found"
Apr 21 10:19:51.864905 kubelet[3389]: I0421 10:19:51.864849 3389 scope.go:117] "RemoveContainer" containerID="60256dffcdea41f391587c5dca3c90d04a8c70bf5529c54cf5e36a0a33f6810f"
Apr 21 10:19:51.865126 containerd[2102]: time="2026-04-21T10:19:51.865091634Z" level=error msg="ContainerStatus for \"60256dffcdea41f391587c5dca3c90d04a8c70bf5529c54cf5e36a0a33f6810f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"60256dffcdea41f391587c5dca3c90d04a8c70bf5529c54cf5e36a0a33f6810f\": not found"
Apr 21 10:19:51.865269 kubelet[3389]: E0421 10:19:51.865231 3389 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"60256dffcdea41f391587c5dca3c90d04a8c70bf5529c54cf5e36a0a33f6810f\": not found" containerID="60256dffcdea41f391587c5dca3c90d04a8c70bf5529c54cf5e36a0a33f6810f"
Apr 21 10:19:51.865350 kubelet[3389]: I0421 10:19:51.865272 3389 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"60256dffcdea41f391587c5dca3c90d04a8c70bf5529c54cf5e36a0a33f6810f"} err="failed to get container status \"60256dffcdea41f391587c5dca3c90d04a8c70bf5529c54cf5e36a0a33f6810f\": rpc error: code = NotFound desc = an error occurred when try to find container \"60256dffcdea41f391587c5dca3c90d04a8c70bf5529c54cf5e36a0a33f6810f\": not found"
Apr 21 10:19:51.865350 kubelet[3389]: I0421 10:19:51.865292 3389 scope.go:117] "RemoveContainer" containerID="faa536c62cf7e7b1aa057e2ff6b492005432fc13f38f21defba7f3e1e98600bf"
Apr 21 10:19:51.865524 containerd[2102]: time="2026-04-21T10:19:51.865459685Z" level=error msg="ContainerStatus for \"faa536c62cf7e7b1aa057e2ff6b492005432fc13f38f21defba7f3e1e98600bf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"faa536c62cf7e7b1aa057e2ff6b492005432fc13f38f21defba7f3e1e98600bf\": not found"
Apr 21 10:19:51.865620 kubelet[3389]: E0421 10:19:51.865587 3389 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"faa536c62cf7e7b1aa057e2ff6b492005432fc13f38f21defba7f3e1e98600bf\": not found" containerID="faa536c62cf7e7b1aa057e2ff6b492005432fc13f38f21defba7f3e1e98600bf"
Apr 21 10:19:51.865707 kubelet[3389]: I0421 10:19:51.865630 3389 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"faa536c62cf7e7b1aa057e2ff6b492005432fc13f38f21defba7f3e1e98600bf"} err="failed to get container status \"faa536c62cf7e7b1aa057e2ff6b492005432fc13f38f21defba7f3e1e98600bf\": rpc error: code = NotFound desc = an error occurred when try to find container \"faa536c62cf7e7b1aa057e2ff6b492005432fc13f38f21defba7f3e1e98600bf\": not found"
Apr 21 10:19:51.865707 kubelet[3389]: I0421 10:19:51.865663 3389 scope.go:117] "RemoveContainer" containerID="df2157902d8ae987bc1f2594d9dbdcb1e7da15eb97b1f312f0bde7ed55691a77"
Apr 21 10:19:51.865986 containerd[2102]: time="2026-04-21T10:19:51.865955545Z" level=error msg="ContainerStatus for \"df2157902d8ae987bc1f2594d9dbdcb1e7da15eb97b1f312f0bde7ed55691a77\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"df2157902d8ae987bc1f2594d9dbdcb1e7da15eb97b1f312f0bde7ed55691a77\": not found"
Apr 21 10:19:51.866143 kubelet[3389]: E0421 10:19:51.866117 3389 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"df2157902d8ae987bc1f2594d9dbdcb1e7da15eb97b1f312f0bde7ed55691a77\": not found" containerID="df2157902d8ae987bc1f2594d9dbdcb1e7da15eb97b1f312f0bde7ed55691a77"
Apr 21 10:19:51.866216 kubelet[3389]: I0421 10:19:51.866148 3389 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"df2157902d8ae987bc1f2594d9dbdcb1e7da15eb97b1f312f0bde7ed55691a77"} err="failed to get container status \"df2157902d8ae987bc1f2594d9dbdcb1e7da15eb97b1f312f0bde7ed55691a77\": rpc error: code = NotFound desc = an error occurred when try to find container \"df2157902d8ae987bc1f2594d9dbdcb1e7da15eb97b1f312f0bde7ed55691a77\": not found"
Apr 21 10:19:52.132887 kubelet[3389]: I0421 10:19:52.132840 3389 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="800c6a97-3ced-4524-aef0-0980ec19a935"
path="/var/lib/kubelet/pods/800c6a97-3ced-4524-aef0-0980ec19a935/volumes" Apr 21 10:19:52.133654 kubelet[3389]: I0421 10:19:52.133624 3389 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8a5b0c0-4977-4663-9e7c-914b9f04c1cf" path="/var/lib/kubelet/pods/c8a5b0c0-4977-4663-9e7c-914b9f04c1cf/volumes" Apr 21 10:19:52.712998 sshd[5133]: pam_unix(sshd:session): session closed for user core Apr 21 10:19:52.716390 systemd[1]: sshd@20-172.31.28.88:22-50.85.169.122:38472.service: Deactivated successfully. Apr 21 10:19:52.721706 systemd-logind[2076]: Session 21 logged out. Waiting for processes to exit. Apr 21 10:19:52.722244 systemd[1]: session-21.scope: Deactivated successfully. Apr 21 10:19:52.723713 systemd-logind[2076]: Removed session 21. Apr 21 10:19:52.878462 systemd[1]: Started sshd@21-172.31.28.88:22-50.85.169.122:50308.service - OpenSSH per-connection server daemon (50.85.169.122:50308). Apr 21 10:19:53.586193 ntpd[2060]: Deleting interface #10 lxc_health, fe80::5406:4bff:fee2:4a58%8#123, interface stats: received=0, sent=0, dropped=0, active_time=79 secs Apr 21 10:19:53.586617 ntpd[2060]: 21 Apr 10:19:53 ntpd[2060]: Deleting interface #10 lxc_health, fe80::5406:4bff:fee2:4a58%8#123, interface stats: received=0, sent=0, dropped=0, active_time=79 secs Apr 21 10:19:53.888006 sshd[5301]: Accepted publickey for core from 50.85.169.122 port 50308 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0 Apr 21 10:19:53.889818 sshd[5301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:19:53.895125 systemd-logind[2076]: New session 22 of user core. Apr 21 10:19:53.902417 systemd[1]: Started session-22.scope - Session 22 of User core. 
Apr 21 10:19:54.306252 kubelet[3389]: E0421 10:19:54.306163 3389 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 21 10:19:55.264516 kubelet[3389]: I0421 10:19:55.264457 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8a74635e-f397-4dd6-8402-aa1087ee7b65-hubble-tls\") pod \"cilium-hqrbm\" (UID: \"8a74635e-f397-4dd6-8402-aa1087ee7b65\") " pod="kube-system/cilium-hqrbm" Apr 21 10:19:55.264516 kubelet[3389]: I0421 10:19:55.264522 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8a74635e-f397-4dd6-8402-aa1087ee7b65-cilium-run\") pod \"cilium-hqrbm\" (UID: \"8a74635e-f397-4dd6-8402-aa1087ee7b65\") " pod="kube-system/cilium-hqrbm" Apr 21 10:19:55.264708 kubelet[3389]: I0421 10:19:55.264543 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8a74635e-f397-4dd6-8402-aa1087ee7b65-etc-cni-netd\") pod \"cilium-hqrbm\" (UID: \"8a74635e-f397-4dd6-8402-aa1087ee7b65\") " pod="kube-system/cilium-hqrbm" Apr 21 10:19:55.264708 kubelet[3389]: I0421 10:19:55.264568 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8a74635e-f397-4dd6-8402-aa1087ee7b65-clustermesh-secrets\") pod \"cilium-hqrbm\" (UID: \"8a74635e-f397-4dd6-8402-aa1087ee7b65\") " pod="kube-system/cilium-hqrbm" Apr 21 10:19:55.264708 kubelet[3389]: I0421 10:19:55.264587 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8a74635e-f397-4dd6-8402-aa1087ee7b65-bpf-maps\") pod 
\"cilium-hqrbm\" (UID: \"8a74635e-f397-4dd6-8402-aa1087ee7b65\") " pod="kube-system/cilium-hqrbm" Apr 21 10:19:55.264708 kubelet[3389]: I0421 10:19:55.264611 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8a74635e-f397-4dd6-8402-aa1087ee7b65-cilium-cgroup\") pod \"cilium-hqrbm\" (UID: \"8a74635e-f397-4dd6-8402-aa1087ee7b65\") " pod="kube-system/cilium-hqrbm" Apr 21 10:19:55.264708 kubelet[3389]: I0421 10:19:55.264631 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a74635e-f397-4dd6-8402-aa1087ee7b65-lib-modules\") pod \"cilium-hqrbm\" (UID: \"8a74635e-f397-4dd6-8402-aa1087ee7b65\") " pod="kube-system/cilium-hqrbm" Apr 21 10:19:55.264708 kubelet[3389]: I0421 10:19:55.264650 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a74635e-f397-4dd6-8402-aa1087ee7b65-xtables-lock\") pod \"cilium-hqrbm\" (UID: \"8a74635e-f397-4dd6-8402-aa1087ee7b65\") " pod="kube-system/cilium-hqrbm" Apr 21 10:19:55.264983 kubelet[3389]: I0421 10:19:55.264675 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8a74635e-f397-4dd6-8402-aa1087ee7b65-cilium-config-path\") pod \"cilium-hqrbm\" (UID: \"8a74635e-f397-4dd6-8402-aa1087ee7b65\") " pod="kube-system/cilium-hqrbm" Apr 21 10:19:55.264983 kubelet[3389]: I0421 10:19:55.264699 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8a74635e-f397-4dd6-8402-aa1087ee7b65-cilium-ipsec-secrets\") pod \"cilium-hqrbm\" (UID: \"8a74635e-f397-4dd6-8402-aa1087ee7b65\") " pod="kube-system/cilium-hqrbm" Apr 21 
10:19:55.264983 kubelet[3389]: I0421 10:19:55.264722 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8a74635e-f397-4dd6-8402-aa1087ee7b65-host-proc-sys-net\") pod \"cilium-hqrbm\" (UID: \"8a74635e-f397-4dd6-8402-aa1087ee7b65\") " pod="kube-system/cilium-hqrbm" Apr 21 10:19:55.264983 kubelet[3389]: I0421 10:19:55.264745 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8a74635e-f397-4dd6-8402-aa1087ee7b65-host-proc-sys-kernel\") pod \"cilium-hqrbm\" (UID: \"8a74635e-f397-4dd6-8402-aa1087ee7b65\") " pod="kube-system/cilium-hqrbm" Apr 21 10:19:55.264983 kubelet[3389]: I0421 10:19:55.264770 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59fqc\" (UniqueName: \"kubernetes.io/projected/8a74635e-f397-4dd6-8402-aa1087ee7b65-kube-api-access-59fqc\") pod \"cilium-hqrbm\" (UID: \"8a74635e-f397-4dd6-8402-aa1087ee7b65\") " pod="kube-system/cilium-hqrbm" Apr 21 10:19:55.265154 kubelet[3389]: I0421 10:19:55.264797 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8a74635e-f397-4dd6-8402-aa1087ee7b65-hostproc\") pod \"cilium-hqrbm\" (UID: \"8a74635e-f397-4dd6-8402-aa1087ee7b65\") " pod="kube-system/cilium-hqrbm" Apr 21 10:19:55.265154 kubelet[3389]: I0421 10:19:55.264821 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8a74635e-f397-4dd6-8402-aa1087ee7b65-cni-path\") pod \"cilium-hqrbm\" (UID: \"8a74635e-f397-4dd6-8402-aa1087ee7b65\") " pod="kube-system/cilium-hqrbm" Apr 21 10:19:55.320655 sshd[5301]: pam_unix(sshd:session): session closed for user core Apr 21 10:19:55.325621 systemd[1]: 
sshd@21-172.31.28.88:22-50.85.169.122:50308.service: Deactivated successfully. Apr 21 10:19:55.330581 systemd[1]: session-22.scope: Deactivated successfully. Apr 21 10:19:55.332012 systemd-logind[2076]: Session 22 logged out. Waiting for processes to exit. Apr 21 10:19:55.333381 systemd-logind[2076]: Removed session 22. Apr 21 10:19:55.474903 containerd[2102]: time="2026-04-21T10:19:55.474861593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hqrbm,Uid:8a74635e-f397-4dd6-8402-aa1087ee7b65,Namespace:kube-system,Attempt:0,}" Apr 21 10:19:55.490189 systemd[1]: Started sshd@22-172.31.28.88:22-50.85.169.122:50320.service - OpenSSH per-connection server daemon (50.85.169.122:50320). Apr 21 10:19:55.521422 containerd[2102]: time="2026-04-21T10:19:55.521233682Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:19:55.521814 containerd[2102]: time="2026-04-21T10:19:55.521553960Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:19:55.521814 containerd[2102]: time="2026-04-21T10:19:55.521577102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:19:55.522464 containerd[2102]: time="2026-04-21T10:19:55.522332977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:19:55.571804 containerd[2102]: time="2026-04-21T10:19:55.571773592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hqrbm,Uid:8a74635e-f397-4dd6-8402-aa1087ee7b65,Namespace:kube-system,Attempt:0,} returns sandbox id \"c811e9404e9e19186ce2f5ca108d5309200364fe072ee6dbeb8b8efc5f95c2c7\"" Apr 21 10:19:55.581701 containerd[2102]: time="2026-04-21T10:19:55.581662450Z" level=info msg="CreateContainer within sandbox \"c811e9404e9e19186ce2f5ca108d5309200364fe072ee6dbeb8b8efc5f95c2c7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 21 10:19:55.601182 containerd[2102]: time="2026-04-21T10:19:55.601146062Z" level=info msg="CreateContainer within sandbox \"c811e9404e9e19186ce2f5ca108d5309200364fe072ee6dbeb8b8efc5f95c2c7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ed8443e234069879a316eab9bb770c28b620e79fd0e93ea5040417873791070a\"" Apr 21 10:19:55.603673 containerd[2102]: time="2026-04-21T10:19:55.603633946Z" level=info msg="StartContainer for \"ed8443e234069879a316eab9bb770c28b620e79fd0e93ea5040417873791070a\"" Apr 21 10:19:55.664098 containerd[2102]: time="2026-04-21T10:19:55.664006197Z" level=info msg="StartContainer for \"ed8443e234069879a316eab9bb770c28b620e79fd0e93ea5040417873791070a\" returns successfully" Apr 21 10:19:55.734688 containerd[2102]: time="2026-04-21T10:19:55.734583092Z" level=info msg="shim disconnected" id=ed8443e234069879a316eab9bb770c28b620e79fd0e93ea5040417873791070a namespace=k8s.io Apr 21 10:19:55.734688 containerd[2102]: time="2026-04-21T10:19:55.734665295Z" level=warning msg="cleaning up after shim disconnected" id=ed8443e234069879a316eab9bb770c28b620e79fd0e93ea5040417873791070a namespace=k8s.io Apr 21 10:19:55.734688 containerd[2102]: time="2026-04-21T10:19:55.734680597Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:19:55.767252 containerd[2102]: time="2026-04-21T10:19:55.767204204Z" 
level=info msg="CreateContainer within sandbox \"c811e9404e9e19186ce2f5ca108d5309200364fe072ee6dbeb8b8efc5f95c2c7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 21 10:19:55.791305 containerd[2102]: time="2026-04-21T10:19:55.790714068Z" level=info msg="CreateContainer within sandbox \"c811e9404e9e19186ce2f5ca108d5309200364fe072ee6dbeb8b8efc5f95c2c7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8e1a4ef416574ab42cc89cfb0d8d02a6bdbf1306ea3be230ac44a5e7816f0452\"" Apr 21 10:19:55.792304 containerd[2102]: time="2026-04-21T10:19:55.791522130Z" level=info msg="StartContainer for \"8e1a4ef416574ab42cc89cfb0d8d02a6bdbf1306ea3be230ac44a5e7816f0452\"" Apr 21 10:19:55.856881 containerd[2102]: time="2026-04-21T10:19:55.856841666Z" level=info msg="StartContainer for \"8e1a4ef416574ab42cc89cfb0d8d02a6bdbf1306ea3be230ac44a5e7816f0452\" returns successfully" Apr 21 10:19:55.898432 containerd[2102]: time="2026-04-21T10:19:55.898352791Z" level=info msg="shim disconnected" id=8e1a4ef416574ab42cc89cfb0d8d02a6bdbf1306ea3be230ac44a5e7816f0452 namespace=k8s.io Apr 21 10:19:55.898432 containerd[2102]: time="2026-04-21T10:19:55.898416444Z" level=warning msg="cleaning up after shim disconnected" id=8e1a4ef416574ab42cc89cfb0d8d02a6bdbf1306ea3be230ac44a5e7816f0452 namespace=k8s.io Apr 21 10:19:55.898432 containerd[2102]: time="2026-04-21T10:19:55.898432360Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:19:56.495278 sshd[5317]: Accepted publickey for core from 50.85.169.122 port 50320 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0 Apr 21 10:19:56.497027 sshd[5317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:19:56.502123 systemd-logind[2076]: New session 23 of user core. Apr 21 10:19:56.505363 systemd[1]: Started session-23.scope - Session 23 of User core. 
Apr 21 10:19:56.724831 kubelet[3389]: I0421 10:19:56.724776 3389 setters.go:618] "Node became not ready" node="ip-172-31-28-88" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-21T10:19:56Z","lastTransitionTime":"2026-04-21T10:19:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Apr 21 10:19:56.774665 containerd[2102]: time="2026-04-21T10:19:56.774387574Z" level=info msg="CreateContainer within sandbox \"c811e9404e9e19186ce2f5ca108d5309200364fe072ee6dbeb8b8efc5f95c2c7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 21 10:19:56.824987 containerd[2102]: time="2026-04-21T10:19:56.824921545Z" level=info msg="CreateContainer within sandbox \"c811e9404e9e19186ce2f5ca108d5309200364fe072ee6dbeb8b8efc5f95c2c7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"272d83b2844a3ed6da1a482c7db94bd9d2639223cc9c987f2e57e8e2e7f42e5b\"" Apr 21 10:19:56.825811 containerd[2102]: time="2026-04-21T10:19:56.825780816Z" level=info msg="StartContainer for \"272d83b2844a3ed6da1a482c7db94bd9d2639223cc9c987f2e57e8e2e7f42e5b\"" Apr 21 10:19:56.898285 containerd[2102]: time="2026-04-21T10:19:56.898158007Z" level=info msg="StartContainer for \"272d83b2844a3ed6da1a482c7db94bd9d2639223cc9c987f2e57e8e2e7f42e5b\" returns successfully" Apr 21 10:19:56.944487 containerd[2102]: time="2026-04-21T10:19:56.944414437Z" level=info msg="shim disconnected" id=272d83b2844a3ed6da1a482c7db94bd9d2639223cc9c987f2e57e8e2e7f42e5b namespace=k8s.io Apr 21 10:19:56.944487 containerd[2102]: time="2026-04-21T10:19:56.944481306Z" level=warning msg="cleaning up after shim disconnected" id=272d83b2844a3ed6da1a482c7db94bd9d2639223cc9c987f2e57e8e2e7f42e5b namespace=k8s.io Apr 21 10:19:56.944487 containerd[2102]: time="2026-04-21T10:19:56.944493965Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 
10:19:57.183754 sshd[5317]: pam_unix(sshd:session): session closed for user core Apr 21 10:19:57.191608 systemd-logind[2076]: Session 23 logged out. Waiting for processes to exit. Apr 21 10:19:57.193074 systemd[1]: sshd@22-172.31.28.88:22-50.85.169.122:50320.service: Deactivated successfully. Apr 21 10:19:57.198469 systemd[1]: session-23.scope: Deactivated successfully. Apr 21 10:19:57.199580 systemd-logind[2076]: Removed session 23. Apr 21 10:19:57.357895 systemd[1]: Started sshd@23-172.31.28.88:22-50.85.169.122:50324.service - OpenSSH per-connection server daemon (50.85.169.122:50324). Apr 21 10:19:57.375500 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-272d83b2844a3ed6da1a482c7db94bd9d2639223cc9c987f2e57e8e2e7f42e5b-rootfs.mount: Deactivated successfully. Apr 21 10:19:57.777646 containerd[2102]: time="2026-04-21T10:19:57.777604206Z" level=info msg="CreateContainer within sandbox \"c811e9404e9e19186ce2f5ca108d5309200364fe072ee6dbeb8b8efc5f95c2c7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 21 10:19:57.806597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount75486913.mount: Deactivated successfully. 
Apr 21 10:19:57.809456 containerd[2102]: time="2026-04-21T10:19:57.809412064Z" level=info msg="CreateContainer within sandbox \"c811e9404e9e19186ce2f5ca108d5309200364fe072ee6dbeb8b8efc5f95c2c7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"179c74934be385631cb8a4f45a195383c9632cdd5d4aed2d0b6aabcb32c051d6\"" Apr 21 10:19:57.811152 containerd[2102]: time="2026-04-21T10:19:57.810308619Z" level=info msg="StartContainer for \"179c74934be385631cb8a4f45a195383c9632cdd5d4aed2d0b6aabcb32c051d6\"" Apr 21 10:19:57.886661 containerd[2102]: time="2026-04-21T10:19:57.886616149Z" level=info msg="StartContainer for \"179c74934be385631cb8a4f45a195383c9632cdd5d4aed2d0b6aabcb32c051d6\" returns successfully" Apr 21 10:19:57.946626 containerd[2102]: time="2026-04-21T10:19:57.946555161Z" level=info msg="shim disconnected" id=179c74934be385631cb8a4f45a195383c9632cdd5d4aed2d0b6aabcb32c051d6 namespace=k8s.io Apr 21 10:19:57.946626 containerd[2102]: time="2026-04-21T10:19:57.946621576Z" level=warning msg="cleaning up after shim disconnected" id=179c74934be385631cb8a4f45a195383c9632cdd5d4aed2d0b6aabcb32c051d6 namespace=k8s.io Apr 21 10:19:57.946626 containerd[2102]: time="2026-04-21T10:19:57.946633691Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:19:58.364627 sshd[5544]: Accepted publickey for core from 50.85.169.122 port 50324 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0 Apr 21 10:19:58.370305 sshd[5544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:19:58.381013 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-179c74934be385631cb8a4f45a195383c9632cdd5d4aed2d0b6aabcb32c051d6-rootfs.mount: Deactivated successfully. Apr 21 10:19:58.396133 systemd-logind[2076]: New session 24 of user core. Apr 21 10:19:58.401437 systemd[1]: Started session-24.scope - Session 24 of User core. 
Apr 21 10:19:58.781724 containerd[2102]: time="2026-04-21T10:19:58.781505447Z" level=info msg="CreateContainer within sandbox \"c811e9404e9e19186ce2f5ca108d5309200364fe072ee6dbeb8b8efc5f95c2c7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 21 10:19:58.811602 containerd[2102]: time="2026-04-21T10:19:58.811560010Z" level=info msg="CreateContainer within sandbox \"c811e9404e9e19186ce2f5ca108d5309200364fe072ee6dbeb8b8efc5f95c2c7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"879183f4e62d080f6f7e7b204c9183bf9f69f1970f2d466f92e2fecf7a68294d\"" Apr 21 10:19:58.814613 containerd[2102]: time="2026-04-21T10:19:58.813559429Z" level=info msg="StartContainer for \"879183f4e62d080f6f7e7b204c9183bf9f69f1970f2d466f92e2fecf7a68294d\"" Apr 21 10:19:58.881699 containerd[2102]: time="2026-04-21T10:19:58.881649827Z" level=info msg="StartContainer for \"879183f4e62d080f6f7e7b204c9183bf9f69f1970f2d466f92e2fecf7a68294d\" returns successfully" Apr 21 10:19:59.377086 systemd[1]: run-containerd-runc-k8s.io-879183f4e62d080f6f7e7b204c9183bf9f69f1970f2d466f92e2fecf7a68294d-runc.DNfLXg.mount: Deactivated successfully. 
Apr 21 10:19:59.613115 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Apr 21 10:20:04.146836 containerd[2102]: time="2026-04-21T10:20:04.146782958Z" level=info msg="StopPodSandbox for \"12675caca024e3e84ee6b29026419b7e6cf2dc5f53d3ac1acd818559d95f34f5\"" Apr 21 10:20:04.148165 containerd[2102]: time="2026-04-21T10:20:04.146903351Z" level=info msg="TearDown network for sandbox \"12675caca024e3e84ee6b29026419b7e6cf2dc5f53d3ac1acd818559d95f34f5\" successfully" Apr 21 10:20:04.148165 containerd[2102]: time="2026-04-21T10:20:04.146920331Z" level=info msg="StopPodSandbox for \"12675caca024e3e84ee6b29026419b7e6cf2dc5f53d3ac1acd818559d95f34f5\" returns successfully" Apr 21 10:20:04.148165 containerd[2102]: time="2026-04-21T10:20:04.147581397Z" level=info msg="RemovePodSandbox for \"12675caca024e3e84ee6b29026419b7e6cf2dc5f53d3ac1acd818559d95f34f5\"" Apr 21 10:20:04.151531 containerd[2102]: time="2026-04-21T10:20:04.151478014Z" level=info msg="Forcibly stopping sandbox \"12675caca024e3e84ee6b29026419b7e6cf2dc5f53d3ac1acd818559d95f34f5\"" Apr 21 10:20:04.151714 containerd[2102]: time="2026-04-21T10:20:04.151603448Z" level=info msg="TearDown network for sandbox \"12675caca024e3e84ee6b29026419b7e6cf2dc5f53d3ac1acd818559d95f34f5\" successfully" Apr 21 10:20:04.157916 containerd[2102]: time="2026-04-21T10:20:04.156919492Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"12675caca024e3e84ee6b29026419b7e6cf2dc5f53d3ac1acd818559d95f34f5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:20:04.157916 containerd[2102]: time="2026-04-21T10:20:04.157008699Z" level=info msg="RemovePodSandbox \"12675caca024e3e84ee6b29026419b7e6cf2dc5f53d3ac1acd818559d95f34f5\" returns successfully" Apr 21 10:20:04.159496 containerd[2102]: time="2026-04-21T10:20:04.159326230Z" level=info msg="StopPodSandbox for \"7b40c47d44f77e48a687e47349f7c465ee3d9f57c929f9356c412572dd123d20\"" Apr 21 10:20:04.159496 containerd[2102]: time="2026-04-21T10:20:04.159409857Z" level=info msg="TearDown network for sandbox \"7b40c47d44f77e48a687e47349f7c465ee3d9f57c929f9356c412572dd123d20\" successfully" Apr 21 10:20:04.159496 containerd[2102]: time="2026-04-21T10:20:04.159420593Z" level=info msg="StopPodSandbox for \"7b40c47d44f77e48a687e47349f7c465ee3d9f57c929f9356c412572dd123d20\" returns successfully" Apr 21 10:20:04.160244 containerd[2102]: time="2026-04-21T10:20:04.160204960Z" level=info msg="RemovePodSandbox for \"7b40c47d44f77e48a687e47349f7c465ee3d9f57c929f9356c412572dd123d20\"" Apr 21 10:20:04.160244 containerd[2102]: time="2026-04-21T10:20:04.160240938Z" level=info msg="Forcibly stopping sandbox \"7b40c47d44f77e48a687e47349f7c465ee3d9f57c929f9356c412572dd123d20\"" Apr 21 10:20:04.160430 containerd[2102]: time="2026-04-21T10:20:04.160313272Z" level=info msg="TearDown network for sandbox \"7b40c47d44f77e48a687e47349f7c465ee3d9f57c929f9356c412572dd123d20\" successfully" Apr 21 10:20:04.165912 containerd[2102]: time="2026-04-21T10:20:04.165865589Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7b40c47d44f77e48a687e47349f7c465ee3d9f57c929f9356c412572dd123d20\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:20:04.166165 containerd[2102]: time="2026-04-21T10:20:04.165941348Z" level=info msg="RemovePodSandbox \"7b40c47d44f77e48a687e47349f7c465ee3d9f57c929f9356c412572dd123d20\" returns successfully" Apr 21 10:20:04.291698 systemd[1]: run-containerd-runc-k8s.io-879183f4e62d080f6f7e7b204c9183bf9f69f1970f2d466f92e2fecf7a68294d-runc.ogHaWM.mount: Deactivated successfully. Apr 21 10:20:04.492689 systemd-networkd[1655]: lxc_health: Link UP Apr 21 10:20:04.503563 systemd-networkd[1655]: lxc_health: Gained carrier Apr 21 10:20:04.508825 (udev-worker)[6183]: Network interface NamePolicy= disabled on kernel command line. Apr 21 10:20:05.509683 kubelet[3389]: I0421 10:20:05.509232 3389 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hqrbm" podStartSLOduration=10.509111085 podStartE2EDuration="10.509111085s" podCreationTimestamp="2026-04-21 10:19:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:19:59.801823293 +0000 UTC m=+115.898533427" watchObservedRunningTime="2026-04-21 10:20:05.509111085 +0000 UTC m=+121.605821229" Apr 21 10:20:05.815346 systemd-networkd[1655]: lxc_health: Gained IPv6LL Apr 21 10:20:08.586348 ntpd[2060]: Listen normally on 13 lxc_health [fe80::4d1:83ff:feb5:8c1e%14]:123 Apr 21 10:20:08.586959 ntpd[2060]: 21 Apr 10:20:08 ntpd[2060]: Listen normally on 13 lxc_health [fe80::4d1:83ff:feb5:8c1e%14]:123 Apr 21 10:20:08.974778 systemd[1]: run-containerd-runc-k8s.io-879183f4e62d080f6f7e7b204c9183bf9f69f1970f2d466f92e2fecf7a68294d-runc.F5ZXCN.mount: Deactivated successfully. Apr 21 10:20:11.155009 systemd[1]: run-containerd-runc-k8s.io-879183f4e62d080f6f7e7b204c9183bf9f69f1970f2d466f92e2fecf7a68294d-runc.1JrbRK.mount: Deactivated successfully. 
Apr 21 10:20:11.384968 sshd[5544]: pam_unix(sshd:session): session closed for user core Apr 21 10:20:11.389413 systemd[1]: sshd@23-172.31.28.88:22-50.85.169.122:50324.service: Deactivated successfully. Apr 21 10:20:11.394066 systemd-logind[2076]: Session 24 logged out. Waiting for processes to exit. Apr 21 10:20:11.395593 systemd[1]: session-24.scope: Deactivated successfully. Apr 21 10:20:11.397497 systemd-logind[2076]: Removed session 24. Apr 21 10:20:27.661796 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66b1a30fb067ec338a20d7c884ee5f1bbaf4df173c3074d30791087eb128bec0-rootfs.mount: Deactivated successfully. Apr 21 10:20:27.685652 containerd[2102]: time="2026-04-21T10:20:27.685588346Z" level=info msg="shim disconnected" id=66b1a30fb067ec338a20d7c884ee5f1bbaf4df173c3074d30791087eb128bec0 namespace=k8s.io Apr 21 10:20:27.685652 containerd[2102]: time="2026-04-21T10:20:27.685645884Z" level=warning msg="cleaning up after shim disconnected" id=66b1a30fb067ec338a20d7c884ee5f1bbaf4df173c3074d30791087eb128bec0 namespace=k8s.io Apr 21 10:20:27.685652 containerd[2102]: time="2026-04-21T10:20:27.685656903Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:20:27.870395 kubelet[3389]: I0421 10:20:27.870293 3389 scope.go:117] "RemoveContainer" containerID="66b1a30fb067ec338a20d7c884ee5f1bbaf4df173c3074d30791087eb128bec0" Apr 21 10:20:27.873433 containerd[2102]: time="2026-04-21T10:20:27.873397118Z" level=info msg="CreateContainer within sandbox \"4e4aa213774e87021561d7ef13ce48bfef121bd924ce9139ff8908c4b6d3aaff\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Apr 21 10:20:27.896635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3470025165.mount: Deactivated successfully. 
Apr 21 10:20:27.904115 containerd[2102]: time="2026-04-21T10:20:27.904041552Z" level=info msg="CreateContainer within sandbox \"4e4aa213774e87021561d7ef13ce48bfef121bd924ce9139ff8908c4b6d3aaff\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"868ff56aa99c5fda45097d7c304a8fc760486231005614b48e5f771331fae305\"" Apr 21 10:20:27.904660 containerd[2102]: time="2026-04-21T10:20:27.904626492Z" level=info msg="StartContainer for \"868ff56aa99c5fda45097d7c304a8fc760486231005614b48e5f771331fae305\"" Apr 21 10:20:27.988579 containerd[2102]: time="2026-04-21T10:20:27.988539689Z" level=info msg="StartContainer for \"868ff56aa99c5fda45097d7c304a8fc760486231005614b48e5f771331fae305\" returns successfully" Apr 21 10:20:31.411464 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d760ed4d5c3e84908ca8ba34ceb8a0d8b8e26f04661d642cf62b9c5b34071e42-rootfs.mount: Deactivated successfully. Apr 21 10:20:31.432122 containerd[2102]: time="2026-04-21T10:20:31.432034322Z" level=info msg="shim disconnected" id=d760ed4d5c3e84908ca8ba34ceb8a0d8b8e26f04661d642cf62b9c5b34071e42 namespace=k8s.io Apr 21 10:20:31.432122 containerd[2102]: time="2026-04-21T10:20:31.432121068Z" level=warning msg="cleaning up after shim disconnected" id=d760ed4d5c3e84908ca8ba34ceb8a0d8b8e26f04661d642cf62b9c5b34071e42 namespace=k8s.io Apr 21 10:20:31.433003 containerd[2102]: time="2026-04-21T10:20:31.432133075Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:20:31.892398 kubelet[3389]: I0421 10:20:31.892363 3389 scope.go:117] "RemoveContainer" containerID="d760ed4d5c3e84908ca8ba34ceb8a0d8b8e26f04661d642cf62b9c5b34071e42" Apr 21 10:20:31.894800 containerd[2102]: time="2026-04-21T10:20:31.894761139Z" level=info msg="CreateContainer within sandbox \"a5ab1ae37157922a2f0fd8b19136a6fe96c08b3bfc2aaada864b61a32e50b0e5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Apr 21 10:20:31.923073 containerd[2102]: 
time="2026-04-21T10:20:31.923007868Z" level=info msg="CreateContainer within sandbox \"a5ab1ae37157922a2f0fd8b19136a6fe96c08b3bfc2aaada864b61a32e50b0e5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"e6dedad0ec6a19651d8f4530ec14dcd156b4ba28fa7100e7377758ce882fb24a\"" Apr 21 10:20:31.923861 containerd[2102]: time="2026-04-21T10:20:31.923820398Z" level=info msg="StartContainer for \"e6dedad0ec6a19651d8f4530ec14dcd156b4ba28fa7100e7377758ce882fb24a\"" Apr 21 10:20:32.012550 containerd[2102]: time="2026-04-21T10:20:32.012507086Z" level=info msg="StartContainer for \"e6dedad0ec6a19651d8f4530ec14dcd156b4ba28fa7100e7377758ce882fb24a\" returns successfully" Apr 21 10:20:36.619128 kubelet[3389]: E0421 10:20:36.619064 3389 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.88:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-88?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"