Apr 13 20:19:40.029084 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 13 18:40:27 -00 2026
Apr 13 20:19:40.029124 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 20:19:40.029145 kernel: BIOS-provided physical RAM map:
Apr 13 20:19:40.029158 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 13 20:19:40.029186 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Apr 13 20:19:40.029196 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Apr 13 20:19:40.030396 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Apr 13 20:19:40.030418 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Apr 13 20:19:40.030431 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Apr 13 20:19:40.030450 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Apr 13 20:19:40.030463 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Apr 13 20:19:40.030476 kernel: NX (Execute Disable) protection: active
Apr 13 20:19:40.030489 kernel: APIC: Static calls initialized
Apr 13 20:19:40.030502 kernel: efi: EFI v2.7 by EDK II
Apr 13 20:19:40.030518 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x7701a018
Apr 13 20:19:40.030536 kernel: SMBIOS 2.7 present.
Apr 13 20:19:40.030551 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Apr 13 20:19:40.030565 kernel: Hypervisor detected: KVM
Apr 13 20:19:40.030579 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 13 20:19:40.030591 kernel: kvm-clock: using sched offset of 3772611829 cycles
Apr 13 20:19:40.030606 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 13 20:19:40.030621 kernel: tsc: Detected 2499.994 MHz processor
Apr 13 20:19:40.030636 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 13 20:19:40.030650 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 13 20:19:40.030685 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Apr 13 20:19:40.030703 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 13 20:19:40.030717 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 13 20:19:40.030730 kernel: Using GB pages for direct mapping
Apr 13 20:19:40.030745 kernel: Secure boot disabled
Apr 13 20:19:40.030759 kernel: ACPI: Early table checksum verification disabled
Apr 13 20:19:40.030773 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Apr 13 20:19:40.030788 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Apr 13 20:19:40.030803 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Apr 13 20:19:40.030818 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Apr 13 20:19:40.030835 kernel: ACPI: FACS 0x00000000789D0000 000040
Apr 13 20:19:40.030850 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Apr 13 20:19:40.030864 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Apr 13 20:19:40.030879 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Apr 13 20:19:40.030893 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Apr 13 20:19:40.030909 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Apr 13 20:19:40.030930 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Apr 13 20:19:40.030948 kernel: ACPI: SSDT 0x0000000078952000 0000D1 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Apr 13 20:19:40.030964 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Apr 13 20:19:40.030980 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Apr 13 20:19:40.030995 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Apr 13 20:19:40.031010 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Apr 13 20:19:40.031025 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Apr 13 20:19:40.031041 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Apr 13 20:19:40.031059 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Apr 13 20:19:40.031076 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Apr 13 20:19:40.031091 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Apr 13 20:19:40.031105 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Apr 13 20:19:40.031121 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x789520d0]
Apr 13 20:19:40.031135 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Apr 13 20:19:40.031151 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 13 20:19:40.031188 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 13 20:19:40.031204 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Apr 13 20:19:40.031223 kernel: NUMA: Initialized distance table, cnt=1
Apr 13 20:19:40.031239 kernel: NODE_DATA(0) allocated [mem 0x7a8f0000-0x7a8f5fff]
Apr 13 20:19:40.031254 kernel: Zone ranges:
Apr 13 20:19:40.031269 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 13 20:19:40.031284 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Apr 13 20:19:40.031300 kernel: Normal empty
Apr 13 20:19:40.031315 kernel: Movable zone start for each node
Apr 13 20:19:40.031331 kernel: Early memory node ranges
Apr 13 20:19:40.031346 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 13 20:19:40.031364 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Apr 13 20:19:40.031380 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Apr 13 20:19:40.031395 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Apr 13 20:19:40.031410 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 13 20:19:40.031426 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 13 20:19:40.031441 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Apr 13 20:19:40.031456 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Apr 13 20:19:40.031472 kernel: ACPI: PM-Timer IO Port: 0xb008
Apr 13 20:19:40.031487 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 13 20:19:40.031503 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Apr 13 20:19:40.031522 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 13 20:19:40.031537 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 13 20:19:40.031552 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 13 20:19:40.031568 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 13 20:19:40.031583 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 13 20:19:40.031598 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 13 20:19:40.031613 kernel: TSC deadline timer available
Apr 13 20:19:40.031628 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 13 20:19:40.031643 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 13 20:19:40.031662 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Apr 13 20:19:40.031676 kernel: Booting paravirtualized kernel on KVM
Apr 13 20:19:40.031692 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 13 20:19:40.031707 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 13 20:19:40.031722 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 13 20:19:40.031738 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 13 20:19:40.031753 kernel: pcpu-alloc: [0] 0 1
Apr 13 20:19:40.031768 kernel: kvm-guest: PV spinlocks enabled
Apr 13 20:19:40.031783 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 13 20:19:40.031804 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 20:19:40.031820 kernel: random: crng init done
Apr 13 20:19:40.031835 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 13 20:19:40.031850 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 13 20:19:40.031866 kernel: Fallback order for Node 0: 0
Apr 13 20:19:40.031881 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Apr 13 20:19:40.031896 kernel: Policy zone: DMA32
Apr 13 20:19:40.031912 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 13 20:19:40.031929 kernel: Memory: 1874644K/2037804K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 162900K reserved, 0K cma-reserved)
Apr 13 20:19:40.031944 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 13 20:19:40.031958 kernel: Kernel/User page tables isolation: enabled
Apr 13 20:19:40.031972 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 13 20:19:40.031988 kernel: ftrace: allocated 149 pages with 4 groups
Apr 13 20:19:40.032003 kernel: Dynamic Preempt: voluntary
Apr 13 20:19:40.032018 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 13 20:19:40.032034 kernel: rcu: RCU event tracing is enabled.
Apr 13 20:19:40.032049 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 13 20:19:40.032068 kernel: Trampoline variant of Tasks RCU enabled.
Apr 13 20:19:40.032082 kernel: Rude variant of Tasks RCU enabled.
Apr 13 20:19:40.032097 kernel: Tracing variant of Tasks RCU enabled.
Apr 13 20:19:40.032112 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 13 20:19:40.032126 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 13 20:19:40.032141 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 13 20:19:40.032156 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 13 20:19:40.032201 kernel: Console: colour dummy device 80x25
Apr 13 20:19:40.032215 kernel: printk: console [tty0] enabled
Apr 13 20:19:40.032229 kernel: printk: console [ttyS0] enabled
Apr 13 20:19:40.032243 kernel: ACPI: Core revision 20230628
Apr 13 20:19:40.032257 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Apr 13 20:19:40.032274 kernel: APIC: Switch to symmetric I/O mode setup
Apr 13 20:19:40.032288 kernel: x2apic enabled
Apr 13 20:19:40.032301 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 13 20:19:40.032317 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns
Apr 13 20:19:40.032332 kernel: Calibrating delay loop (skipped) preset value.. 4999.98 BogoMIPS (lpj=2499994)
Apr 13 20:19:40.032349 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Apr 13 20:19:40.032363 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Apr 13 20:19:40.032378 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 13 20:19:40.032392 kernel: Spectre V2 : Mitigation: Retpolines
Apr 13 20:19:40.032406 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 13 20:19:40.032421 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 13 20:19:40.032435 kernel: RETBleed: Vulnerable
Apr 13 20:19:40.032450 kernel: Speculative Store Bypass: Vulnerable
Apr 13 20:19:40.032465 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 13 20:19:40.032480 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 13 20:19:40.032497 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 13 20:19:40.032512 kernel: active return thunk: its_return_thunk
Apr 13 20:19:40.032527 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 13 20:19:40.032541 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 13 20:19:40.032556 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 13 20:19:40.032571 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 13 20:19:40.032586 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Apr 13 20:19:40.032600 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Apr 13 20:19:40.032615 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 13 20:19:40.032630 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 13 20:19:40.032645 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 13 20:19:40.032663 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 13 20:19:40.032678 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 13 20:19:40.032693 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Apr 13 20:19:40.032708 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Apr 13 20:19:40.032723 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Apr 13 20:19:40.032738 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Apr 13 20:19:40.032753 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Apr 13 20:19:40.032768 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Apr 13 20:19:40.032783 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Apr 13 20:19:40.032798 kernel: Freeing SMP alternatives memory: 32K
Apr 13 20:19:40.032813 kernel: pid_max: default: 32768 minimum: 301
Apr 13 20:19:40.032831 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 13 20:19:40.032846 kernel: landlock: Up and running.
Apr 13 20:19:40.032862 kernel: SELinux: Initializing.
Apr 13 20:19:40.032876 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 13 20:19:40.032892 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 13 20:19:40.032907 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8175M CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x4)
Apr 13 20:19:40.032922 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 20:19:40.032938 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 20:19:40.032954 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 20:19:40.032968 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Apr 13 20:19:40.032985 kernel: signal: max sigframe size: 3632
Apr 13 20:19:40.033000 kernel: rcu: Hierarchical SRCU implementation.
Apr 13 20:19:40.033014 kernel: rcu: Max phase no-delay instances is 400.
Apr 13 20:19:40.033028 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 13 20:19:40.033045 kernel: smp: Bringing up secondary CPUs ...
Apr 13 20:19:40.033059 kernel: smpboot: x86: Booting SMP configuration:
Apr 13 20:19:40.033074 kernel: .... node #0, CPUs: #1
Apr 13 20:19:40.033090 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Apr 13 20:19:40.033105 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Apr 13 20:19:40.033124 kernel: smp: Brought up 1 node, 2 CPUs
Apr 13 20:19:40.033139 kernel: smpboot: Max logical packages: 1
Apr 13 20:19:40.033154 kernel: smpboot: Total of 2 processors activated (9999.97 BogoMIPS)
Apr 13 20:19:40.033189 kernel: devtmpfs: initialized
Apr 13 20:19:40.033204 kernel: x86/mm: Memory block size: 128MB
Apr 13 20:19:40.033221 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Apr 13 20:19:40.033237 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 13 20:19:40.033254 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 13 20:19:40.033270 kernel: pinctrl core: initialized pinctrl subsystem
Apr 13 20:19:40.033290 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 13 20:19:40.033306 kernel: audit: initializing netlink subsys (disabled)
Apr 13 20:19:40.033323 kernel: audit: type=2000 audit(1776111579.411:1): state=initialized audit_enabled=0 res=1
Apr 13 20:19:40.033339 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 13 20:19:40.033355 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 13 20:19:40.033371 kernel: cpuidle: using governor menu
Apr 13 20:19:40.033388 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 13 20:19:40.033405 kernel: dca service started, version 1.12.1
Apr 13 20:19:40.033421 kernel: PCI: Using configuration type 1 for base access
Apr 13 20:19:40.033441 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 13 20:19:40.033457 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 13 20:19:40.033474 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 13 20:19:40.033490 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 13 20:19:40.033506 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 13 20:19:40.033522 kernel: ACPI: Added _OSI(Module Device)
Apr 13 20:19:40.033538 kernel: ACPI: Added _OSI(Processor Device)
Apr 13 20:19:40.033555 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 13 20:19:40.033571 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Apr 13 20:19:40.033591 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 13 20:19:40.033608 kernel: ACPI: Interpreter enabled
Apr 13 20:19:40.033624 kernel: ACPI: PM: (supports S0 S5)
Apr 13 20:19:40.033641 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 13 20:19:40.033658 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 13 20:19:40.033675 kernel: PCI: Using E820 reservations for host bridge windows
Apr 13 20:19:40.033692 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Apr 13 20:19:40.033708 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 13 20:19:40.033950 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Apr 13 20:19:40.034122 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Apr 13 20:19:40.038504 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Apr 13 20:19:40.038553 kernel: acpiphp: Slot [3] registered
Apr 13 20:19:40.038573 kernel: acpiphp: Slot [4] registered
Apr 13 20:19:40.038591 kernel: acpiphp: Slot [5] registered
Apr 13 20:19:40.038609 kernel: acpiphp: Slot [6] registered
Apr 13 20:19:40.038625 kernel: acpiphp: Slot [7] registered
Apr 13 20:19:40.038650 kernel: acpiphp: Slot [8] registered
Apr 13 20:19:40.038681 kernel: acpiphp: Slot [9] registered
Apr 13 20:19:40.038697 kernel: acpiphp: Slot [10] registered
Apr 13 20:19:40.038714 kernel: acpiphp: Slot [11] registered
Apr 13 20:19:40.038732 kernel: acpiphp: Slot [12] registered
Apr 13 20:19:40.038750 kernel: acpiphp: Slot [13] registered
Apr 13 20:19:40.038765 kernel: acpiphp: Slot [14] registered
Apr 13 20:19:40.038781 kernel: acpiphp: Slot [15] registered
Apr 13 20:19:40.038799 kernel: acpiphp: Slot [16] registered
Apr 13 20:19:40.038815 kernel: acpiphp: Slot [17] registered
Apr 13 20:19:40.038835 kernel: acpiphp: Slot [18] registered
Apr 13 20:19:40.038853 kernel: acpiphp: Slot [19] registered
Apr 13 20:19:40.038870 kernel: acpiphp: Slot [20] registered
Apr 13 20:19:40.038888 kernel: acpiphp: Slot [21] registered
Apr 13 20:19:40.038905 kernel: acpiphp: Slot [22] registered
Apr 13 20:19:40.038921 kernel: acpiphp: Slot [23] registered
Apr 13 20:19:40.038938 kernel: acpiphp: Slot [24] registered
Apr 13 20:19:40.038955 kernel: acpiphp: Slot [25] registered
Apr 13 20:19:40.038973 kernel: acpiphp: Slot [26] registered
Apr 13 20:19:40.038996 kernel: acpiphp: Slot [27] registered
Apr 13 20:19:40.039013 kernel: acpiphp: Slot [28] registered
Apr 13 20:19:40.039031 kernel: acpiphp: Slot [29] registered
Apr 13 20:19:40.039047 kernel: acpiphp: Slot [30] registered
Apr 13 20:19:40.039063 kernel: acpiphp: Slot [31] registered
Apr 13 20:19:40.039079 kernel: PCI host bridge to bus 0000:00
Apr 13 20:19:40.039260 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 13 20:19:40.039392 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 13 20:19:40.039525 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 13 20:19:40.039649 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Apr 13 20:19:40.039774 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Apr 13 20:19:40.039897 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 13 20:19:40.040065 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Apr 13 20:19:40.041308 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Apr 13 20:19:40.041484 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Apr 13 20:19:40.041631 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Apr 13 20:19:40.041769 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Apr 13 20:19:40.041905 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Apr 13 20:19:40.042040 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Apr 13 20:19:40.042199 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Apr 13 20:19:40.042341 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Apr 13 20:19:40.042482 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Apr 13 20:19:40.042635 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Apr 13 20:19:40.042786 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Apr 13 20:19:40.042968 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 13 20:19:40.043110 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Apr 13 20:19:40.050022 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 13 20:19:40.050226 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Apr 13 20:19:40.050376 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Apr 13 20:19:40.050515 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Apr 13 20:19:40.050645 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Apr 13 20:19:40.050676 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 13 20:19:40.050693 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 13 20:19:40.050708 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 13 20:19:40.050724 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 13 20:19:40.050739 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Apr 13 20:19:40.050760 kernel: iommu: Default domain type: Translated
Apr 13 20:19:40.050776 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 13 20:19:40.050791 kernel: efivars: Registered efivars operations
Apr 13 20:19:40.050807 kernel: PCI: Using ACPI for IRQ routing
Apr 13 20:19:40.050822 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 13 20:19:40.050839 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Apr 13 20:19:40.050854 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Apr 13 20:19:40.050983 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Apr 13 20:19:40.051128 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Apr 13 20:19:40.051282 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 13 20:19:40.051302 kernel: vgaarb: loaded
Apr 13 20:19:40.051317 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Apr 13 20:19:40.051334 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Apr 13 20:19:40.051350 kernel: clocksource: Switched to clocksource kvm-clock
Apr 13 20:19:40.051367 kernel: VFS: Disk quotas dquot_6.6.0
Apr 13 20:19:40.051384 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 13 20:19:40.051400 kernel: pnp: PnP ACPI init
Apr 13 20:19:40.051416 kernel: pnp: PnP ACPI: found 5 devices
Apr 13 20:19:40.051437 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 13 20:19:40.051453 kernel: NET: Registered PF_INET protocol family
Apr 13 20:19:40.051470 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 13 20:19:40.051486 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Apr 13 20:19:40.051503 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 13 20:19:40.051519 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 13 20:19:40.051536 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Apr 13 20:19:40.051552 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Apr 13 20:19:40.051572 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 13 20:19:40.051588 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 13 20:19:40.051604 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 13 20:19:40.051621 kernel: NET: Registered PF_XDP protocol family
Apr 13 20:19:40.051755 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 13 20:19:40.051876 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 13 20:19:40.051990 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 13 20:19:40.052104 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Apr 13 20:19:40.052317 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Apr 13 20:19:40.052458 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Apr 13 20:19:40.052479 kernel: PCI: CLS 0 bytes, default 64
Apr 13 20:19:40.052495 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 13 20:19:40.052510 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns
Apr 13 20:19:40.052527 kernel: clocksource: Switched to clocksource tsc
Apr 13 20:19:40.052542 kernel: Initialise system trusted keyrings
Apr 13 20:19:40.052558 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Apr 13 20:19:40.052573 kernel: Key type asymmetric registered
Apr 13 20:19:40.052592 kernel: Asymmetric key parser 'x509' registered
Apr 13 20:19:40.052607 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 13 20:19:40.052622 kernel: io scheduler mq-deadline registered
Apr 13 20:19:40.052638 kernel: io scheduler kyber registered
Apr 13 20:19:40.052653 kernel: io scheduler bfq registered
Apr 13 20:19:40.052668 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 13 20:19:40.052684 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 13 20:19:40.052699 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 13 20:19:40.052714 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 13 20:19:40.052732 kernel: i8042: Warning: Keylock active
Apr 13 20:19:40.052747 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 13 20:19:40.052763 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 13 20:19:40.052896 kernel: rtc_cmos 00:00: RTC can wake from S4
Apr 13 20:19:40.053015 kernel: rtc_cmos 00:00: registered as rtc0
Apr 13 20:19:40.053131 kernel: rtc_cmos 00:00: setting system clock to 2026-04-13T20:19:39 UTC (1776111579)
Apr 13 20:19:40.053271 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Apr 13 20:19:40.053292 kernel: intel_pstate: CPU model not supported
Apr 13 20:19:40.053313 kernel: efifb: probing for efifb
Apr 13 20:19:40.053329 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Apr 13 20:19:40.053343 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Apr 13 20:19:40.053360 kernel: efifb: scrolling: redraw
Apr 13 20:19:40.053377 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 13 20:19:40.053392 kernel: Console: switching to colour frame buffer device 100x37
Apr 13 20:19:40.053410 kernel: fb0: EFI VGA frame buffer device
Apr 13 20:19:40.053428 kernel: pstore: Using crash dump compression: deflate
Apr 13 20:19:40.053447 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 13 20:19:40.053470 kernel: NET: Registered PF_INET6 protocol family
Apr 13 20:19:40.053489 kernel: Segment Routing with IPv6
Apr 13 20:19:40.053507 kernel: In-situ OAM (IOAM) with IPv6
Apr 13 20:19:40.053526 kernel: NET: Registered PF_PACKET protocol family
Apr 13 20:19:40.053545 kernel: Key type dns_resolver registered
Apr 13 20:19:40.053563 kernel: IPI shorthand broadcast: enabled
Apr 13 20:19:40.053614 kernel: sched_clock: Marking stable (576003699, 184693591)->(866241867, -105544577)
Apr 13 20:19:40.053638 kernel: registered taskstats version 1
Apr 13 20:19:40.053658 kernel: Loading compiled-in X.509 certificates
Apr 13 20:19:40.053682 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51221ce98a81ccf90ef3d16403b42695603c5d00'
Apr 13 20:19:40.053701 kernel: Key type .fscrypt registered
Apr 13 20:19:40.053721 kernel: Key type fscrypt-provisioning registered
Apr 13 20:19:40.053740 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 13 20:19:40.053760 kernel: ima: Allocated hash algorithm: sha1
Apr 13 20:19:40.053779 kernel: ima: No architecture policies found
Apr 13 20:19:40.053799 kernel: clk: Disabling unused clocks
Apr 13 20:19:40.053819 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 13 20:19:40.053839 kernel: Write protecting the kernel read-only data: 36864k
Apr 13 20:19:40.053863 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 13 20:19:40.053883 kernel: Run /init as init process
Apr 13 20:19:40.053902 kernel: with arguments:
Apr 13 20:19:40.053921 kernel: /init
Apr 13 20:19:40.053941 kernel: with environment:
Apr 13 20:19:40.053960 kernel: HOME=/
Apr 13 20:19:40.053980 kernel: TERM=linux
Apr 13 20:19:40.054003 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 20:19:40.054027 systemd[1]: Detected virtualization amazon.
Apr 13 20:19:40.054052 systemd[1]: Detected architecture x86-64.
Apr 13 20:19:40.054071 systemd[1]: Running in initrd.
Apr 13 20:19:40.054091 systemd[1]: No hostname configured, using default hostname.
Apr 13 20:19:40.054110 systemd[1]: Hostname set to .
Apr 13 20:19:40.054132 systemd[1]: Initializing machine ID from VM UUID.
Apr 13 20:19:40.054152 systemd[1]: Queued start job for default target initrd.target.
Apr 13 20:19:40.054923 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 20:19:40.054958 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 20:19:40.054983 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 13 20:19:40.055002 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 20:19:40.055021 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 13 20:19:40.055045 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 13 20:19:40.055071 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 13 20:19:40.055090 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 13 20:19:40.055110 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 20:19:40.055129 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 20:19:40.055148 systemd[1]: Reached target paths.target - Path Units.
Apr 13 20:19:40.055189 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 20:19:40.055208 systemd[1]: Reached target swap.target - Swaps.
Apr 13 20:19:40.055226 systemd[1]: Reached target timers.target - Timer Units.
Apr 13 20:19:40.055251 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 20:19:40.055269 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 20:19:40.055288 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 13 20:19:40.055306 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 13 20:19:40.055325 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 20:19:40.055345 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 20:19:40.055364 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 20:19:40.055383 systemd[1]: Reached target sockets.target - Socket Units.
Apr 13 20:19:40.055406 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 13 20:19:40.055425 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 20:19:40.055444 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 13 20:19:40.055462 systemd[1]: Starting systemd-fsck-usr.service...
Apr 13 20:19:40.055480 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 20:19:40.055499 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 20:19:40.055518 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 20:19:40.055536 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 13 20:19:40.055554 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 13 20:19:40.055580 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 20:19:40.055599 systemd[1]: Finished systemd-fsck-usr.service.
Apr 13 20:19:40.055677 systemd-journald[179]: Collecting audit messages is disabled.
Apr 13 20:19:40.055721 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 13 20:19:40.055741 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 13 20:19:40.055761 systemd-journald[179]: Journal started Apr 13 20:19:40.055802 systemd-journald[179]: Runtime Journal (/run/log/journal/ec22a39cb46b87cdc1e9af2b861feb55) is 4.7M, max 38.2M, 33.4M free. Apr 13 20:19:40.057855 systemd-modules-load[180]: Inserted module 'overlay' Apr 13 20:19:40.062332 systemd[1]: Started systemd-journald.service - Journal Service. Apr 13 20:19:40.063699 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:19:40.071417 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 20:19:40.084880 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 13 20:19:40.088428 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 13 20:19:40.101831 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 13 20:19:40.121189 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 13 20:19:40.121550 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 20:19:40.129071 systemd-modules-load[180]: Inserted module 'br_netfilter' Apr 13 20:19:40.132301 kernel: Bridge firewalling registered Apr 13 20:19:40.129481 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 13 20:19:40.131656 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 13 20:19:40.132814 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Apr 13 20:19:40.138373 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 13 20:19:40.153137 dracut-cmdline[207]: dracut-dracut-053 Apr 13 20:19:40.158097 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156 Apr 13 20:19:40.162768 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 13 20:19:40.172382 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 13 20:19:40.222680 systemd-resolved[230]: Positive Trust Anchors: Apr 13 20:19:40.223676 systemd-resolved[230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 13 20:19:40.223741 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 13 20:19:40.231211 systemd-resolved[230]: Defaulting to hostname 'linux'. Apr 13 20:19:40.234724 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 13 20:19:40.235626 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
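The dracut-cmdline record above echoes the full kernel command line, a space-separated list of `key=value` parameters (some repeated, e.g. `console=` and `rootflags=`). A minimal sketch of how such a line can be parsed — the command line below is an abbreviated subset of the parameters from the record, not the full string:

```python
# Hedged sketch: parse a kernel command line like the one in the
# dracut-cmdline record above. /proc/cmdline has the same format at
# runtime; this string is an abbreviated subset of the logged parameters.
cmdline = (
    "rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro "
    "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
    "root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 "
    "flatcar.first_boot=detected flatcar.oem.id=ec2 net.ifnames=0"
)

def parse_cmdline(line):
    """Split a kernel command line into (key, value) pairs.

    Bare flags without '=' map to None; repeated keys such as 'console'
    are kept as separate pairs, since the kernel honors every occurrence.
    """
    params = []
    for token in line.split():
        key, sep, value = token.partition("=")
        params.append((key, value if sep else None))
    return params

params = dict(parse_cmdline(cmdline))  # collapsing: last occurrence wins
print(params["root"])     # LABEL=ROOT
print(params["console"])  # tty0 (the later of the two console= entries)
```

Note that collapsing into a `dict` loses the duplicate `console=` entry; keeping the pair list preserves it, which matters for parameters the kernel treats cumulatively.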
Apr 13 20:19:40.256201 kernel: SCSI subsystem initialized
Apr 13 20:19:40.267200 kernel: Loading iSCSI transport class v2.0-870.
Apr 13 20:19:40.279201 kernel: iscsi: registered transport (tcp)
Apr 13 20:19:40.301871 kernel: iscsi: registered transport (qla4xxx)
Apr 13 20:19:40.301955 kernel: QLogic iSCSI HBA Driver
Apr 13 20:19:40.343758 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 13 20:19:40.350381 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 13 20:19:40.377914 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 13 20:19:40.378641 kernel: device-mapper: uevent: version 1.0.3
Apr 13 20:19:40.378700 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 13 20:19:40.423210 kernel: raid6: avx512x4 gen() 15700 MB/s
Apr 13 20:19:40.441208 kernel: raid6: avx512x2 gen() 15471 MB/s
Apr 13 20:19:40.459214 kernel: raid6: avx512x1 gen() 14591 MB/s
Apr 13 20:19:40.477210 kernel: raid6: avx2x4 gen() 15566 MB/s
Apr 13 20:19:40.495206 kernel: raid6: avx2x2 gen() 15565 MB/s
Apr 13 20:19:40.514156 kernel: raid6: avx2x1 gen() 11791 MB/s
Apr 13 20:19:40.514247 kernel: raid6: using algorithm avx512x4 gen() 15700 MB/s
Apr 13 20:19:40.533185 kernel: raid6: .... xor() 7752 MB/s, rmw enabled
Apr 13 20:19:40.533249 kernel: raid6: using avx512x2 recovery algorithm
Apr 13 20:19:40.557208 kernel: xor: automatically using best checksumming function avx
Apr 13 20:19:40.721199 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 13 20:19:40.732128 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 20:19:40.738382 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 20:19:40.753830 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Apr 13 20:19:40.758968 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 20:19:40.767362 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 13 20:19:40.788369 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation
Apr 13 20:19:40.819627 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 20:19:40.825373 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 20:19:40.877539 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 20:19:40.888414 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 13 20:19:40.909278 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 13 20:19:40.912341 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 20:19:40.913248 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 20:19:40.914812 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 20:19:40.922405 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 13 20:19:40.954088 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 20:19:40.985278 kernel: cryptd: max_cpu_qlen set to 1000
Apr 13 20:19:41.012736 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 13 20:19:41.013941 kernel: AES CTR mode by8 optimization enabled
Apr 13 20:19:41.026926 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 20:19:41.033405 kernel: ena 0000:00:05.0: ENA device version: 0.10
Apr 13 20:19:41.033684 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Apr 13 20:19:41.030289 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 20:19:41.037781 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 20:19:41.063303 kernel: nvme nvme0: pci function 0000:00:04.0
Apr 13 20:19:41.063627 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Apr 13 20:19:41.063652 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Apr 13 20:19:41.063856 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:c5:71:c7:ac:93
Apr 13 20:19:41.064028 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Apr 13 20:19:41.048893 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 20:19:41.049260 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:19:41.054364 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 20:19:41.072184 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 13 20:19:41.072259 kernel: GPT:9289727 != 33554431
Apr 13 20:19:41.072280 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 13 20:19:41.075717 kernel: GPT:9289727 != 33554431
Apr 13 20:19:41.075788 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 13 20:19:41.079222 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 13 20:19:41.087053 (udev-worker)[446]: Network interface NamePolicy= disabled on kernel command line.
Apr 13 20:19:41.088610 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 20:19:41.116295 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:19:41.122507 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 20:19:41.151950 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 20:19:41.172227 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (450)
Apr 13 20:19:41.181200 kernel: BTRFS: device fsid de1edd48-4571-4695-92f0-7af6e33c4e3d devid 1 transid 31 /dev/nvme0n1p3 scanned by (udev-worker) (453)
Apr 13 20:19:41.236508 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Apr 13 20:19:41.279568 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Apr 13 20:19:41.285656 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Apr 13 20:19:41.286423 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Apr 13 20:19:41.298554 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 13 20:19:41.303364 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 13 20:19:41.313234 disk-uuid[629]: Primary Header is updated.
Apr 13 20:19:41.313234 disk-uuid[629]: Secondary Entries is updated.
Apr 13 20:19:41.313234 disk-uuid[629]: Secondary Header is updated.
Apr 13 20:19:41.321198 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 13 20:19:41.329218 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 13 20:19:41.337194 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 13 20:19:42.336240 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 13 20:19:42.338431 disk-uuid[630]: The operation has completed successfully.
Apr 13 20:19:42.503069 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 13 20:19:42.503229 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 13 20:19:42.527483 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
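The `GPT:9289727 != 33554431` warnings above come from booting a disk image on a larger volume: a well-formed GPT keeps its backup (alternate) header in the last LBA of the disk, so when the image is written to a bigger EBS volume the recorded alternate-header LBA no longer matches, until `disk-uuid.service` rewrites the headers as logged. A small sketch of the arithmetic, assuming the 512-byte logical sectors implied by the logged numbers:

```python
# Hedged sketch of the arithmetic behind "GPT:9289727 != 33554431" above.
# The GPT backup header must live in the last LBA; the image's header
# still points at the last LBA of the (smaller) disk it was built for.
SECTOR = 512  # assumed logical sector size, consistent with the logged LBAs

def expected_alt_lba(disk_bytes, sector=SECTOR):
    """Last addressable LBA, where a well-formed GPT keeps its backup header."""
    return disk_bytes // sector - 1

# A 16 GiB volume has 33554432 sectors, so its last LBA is 33554431,
# matching the right-hand side of the kernel's complaint.
print(expected_alt_lba(16 * 2**30))  # -> 33554431

# The image's recorded alternate LBA implies its original size:
image_last_lba = 9289727
print((image_last_lba + 1) * SECTOR / 2**30)  # ~4.43 GiB source image
```

Under these assumptions, the mismatch is expected and benign on first boot; the log shows disk-uuid updating the primary header, secondary entries, and secondary header before the partitions are rescanned.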
Apr 13 20:19:42.531679 sh[973]: Success
Apr 13 20:19:42.557190 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Apr 13 20:19:42.648801 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 13 20:19:42.657056 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 13 20:19:42.661116 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 13 20:19:42.698883 kernel: BTRFS info (device dm-0): first mount of filesystem de1edd48-4571-4695-92f0-7af6e33c4e3d
Apr 13 20:19:42.698962 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 13 20:19:42.698984 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 13 20:19:42.703511 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 13 20:19:42.703586 kernel: BTRFS info (device dm-0): using free space tree
Apr 13 20:19:42.821219 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 13 20:19:42.833159 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 13 20:19:42.834509 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 13 20:19:42.846459 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 13 20:19:42.850378 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 13 20:19:42.877720 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 20:19:42.877801 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 13 20:19:42.877823 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 13 20:19:42.886256 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 13 20:19:42.900436 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 13 20:19:42.904517 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 20:19:42.913443 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 13 20:19:42.923124 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 13 20:19:42.987442 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 20:19:42.997400 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 20:19:43.033273 systemd-networkd[1166]: lo: Link UP
Apr 13 20:19:43.033282 systemd-networkd[1166]: lo: Gained carrier
Apr 13 20:19:43.035110 systemd-networkd[1166]: Enumeration completed
Apr 13 20:19:43.035262 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 20:19:43.036368 systemd-networkd[1166]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:19:43.036372 systemd-networkd[1166]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 20:19:43.037675 systemd[1]: Reached target network.target - Network.
Apr 13 20:19:43.040483 systemd-networkd[1166]: eth0: Link UP
Apr 13 20:19:43.040489 systemd-networkd[1166]: eth0: Gained carrier
Apr 13 20:19:43.040505 systemd-networkd[1166]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:19:43.050289 systemd-networkd[1166]: eth0: DHCPv4 address 172.31.31.175/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 13 20:19:43.192217 ignition[1077]: Ignition 2.19.0
Apr 13 20:19:43.192237 ignition[1077]: Stage: fetch-offline
Apr 13 20:19:43.192499 ignition[1077]: no configs at "/usr/lib/ignition/base.d"
Apr 13 20:19:43.194516 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 20:19:43.192511 ignition[1077]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 13 20:19:43.192853 ignition[1077]: Ignition finished successfully
Apr 13 20:19:43.200425 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 13 20:19:43.217869 ignition[1175]: Ignition 2.19.0
Apr 13 20:19:43.217888 ignition[1175]: Stage: fetch
Apr 13 20:19:43.218376 ignition[1175]: no configs at "/usr/lib/ignition/base.d"
Apr 13 20:19:43.218390 ignition[1175]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 13 20:19:43.218508 ignition[1175]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 13 20:19:43.231465 ignition[1175]: PUT result: OK
Apr 13 20:19:43.233539 ignition[1175]: parsed url from cmdline: ""
Apr 13 20:19:43.233556 ignition[1175]: no config URL provided
Apr 13 20:19:43.233565 ignition[1175]: reading system config file "/usr/lib/ignition/user.ign"
Apr 13 20:19:43.233580 ignition[1175]: no config at "/usr/lib/ignition/user.ign"
Apr 13 20:19:43.233602 ignition[1175]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 13 20:19:43.235049 ignition[1175]: PUT result: OK
Apr 13 20:19:43.235101 ignition[1175]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Apr 13 20:19:43.236022 ignition[1175]: GET result: OK
Apr 13 20:19:43.236160 ignition[1175]: parsing config with SHA512: f36881fcb383e4ece534144d558f3deaf79892f6a0946cc6c0d12f9ac34893330f669565c0dcd287dcb506c709468499c02e4326aa2786beba804062bbfb6035
Apr 13 20:19:43.245027 unknown[1175]: fetched base config from "system"
Apr 13 20:19:43.245047 unknown[1175]: fetched base config from "system"
Apr 13 20:19:43.245736 ignition[1175]: fetch: fetch complete
Apr 13 20:19:43.245055 unknown[1175]: fetched user config from "aws"
Apr 13 20:19:43.245745 ignition[1175]: fetch: fetch passed
Apr 13 20:19:43.245806 ignition[1175]: Ignition finished successfully
Apr 13 20:19:43.248713 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
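The `PUT http://169.254.169.254/latest/api/token` followed by an authenticated `GET` of user-data in the fetch-stage records above is the EC2 IMDSv2 pattern: a short-lived session token is requested first, then presented on every metadata read. A minimal sketch of that sequence, assuming Python's standard library (only runnable from inside an EC2 instance, where the link-local 169.254.169.254 endpoint answers):

```python
# Hedged sketch of the IMDSv2 token-then-fetch sequence logged by Ignition:
# PUT .../latest/api/token, then GET user-data with the returned token.
import urllib.request

IMDS = "http://169.254.169.254"

def token_request(ttl_seconds=60):
    """PUT request for a session token ("PUT .../api/token: attempt #1")."""
    return urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )

def user_data_request(token):
    """GET request for user data; the log shows the dated 2019-10-01 path."""
    return urllib.request.Request(
        f"{IMDS}/2019-10-01/user-data",
        headers={"X-aws-ec2-metadata-token": token},
    )

def fetch_user_data():
    # Network calls only succeed on an EC2 instance; elsewhere they time out.
    token = urllib.request.urlopen(token_request(), timeout=2).read().decode()
    return urllib.request.urlopen(user_data_request(token), timeout=2).read()
```

The log then shows Ignition hashing the fetched payload (`parsing config with SHA512: ...`) before merging it with the base config, which is why a corrupted user-data blob fails the fetch stage rather than being silently applied.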
Apr 13 20:19:43.253432 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 13 20:19:43.272041 ignition[1182]: Ignition 2.19.0 Apr 13 20:19:43.272060 ignition[1182]: Stage: kargs Apr 13 20:19:43.272544 ignition[1182]: no configs at "/usr/lib/ignition/base.d" Apr 13 20:19:43.272558 ignition[1182]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 13 20:19:43.272675 ignition[1182]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 13 20:19:43.273611 ignition[1182]: PUT result: OK Apr 13 20:19:43.276959 ignition[1182]: kargs: kargs passed Apr 13 20:19:43.277060 ignition[1182]: Ignition finished successfully Apr 13 20:19:43.279398 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 13 20:19:43.286384 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 13 20:19:43.302376 ignition[1188]: Ignition 2.19.0 Apr 13 20:19:43.302394 ignition[1188]: Stage: disks Apr 13 20:19:43.302939 ignition[1188]: no configs at "/usr/lib/ignition/base.d" Apr 13 20:19:43.302953 ignition[1188]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 13 20:19:43.303070 ignition[1188]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 13 20:19:43.304865 ignition[1188]: PUT result: OK Apr 13 20:19:43.308239 ignition[1188]: disks: disks passed Apr 13 20:19:43.308318 ignition[1188]: Ignition finished successfully Apr 13 20:19:43.310413 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 13 20:19:43.311093 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 13 20:19:43.311532 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 13 20:19:43.312126 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 13 20:19:43.312763 systemd[1]: Reached target sysinit.target - System Initialization. Apr 13 20:19:43.313387 systemd[1]: Reached target basic.target - Basic System. 
Apr 13 20:19:43.324506 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 13 20:19:43.349581 systemd-fsck[1196]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 13 20:19:43.353621 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 13 20:19:43.359325 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 13 20:19:43.469188 kernel: EXT4-fs (nvme0n1p9): mounted filesystem e02793bf-3e0d-4c7e-b11a-92c664da7ce3 r/w with ordered data mode. Quota mode: none. Apr 13 20:19:43.469788 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 13 20:19:43.471215 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 13 20:19:43.483312 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 13 20:19:43.496311 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 13 20:19:43.498095 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 13 20:19:43.499255 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 13 20:19:43.499321 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 13 20:19:43.517311 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1215) Apr 13 20:19:43.519186 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 13 20:19:43.527411 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:19:43.527447 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Apr 13 20:19:43.527465 kernel: BTRFS info (device nvme0n1p6): using free space tree Apr 13 20:19:43.533415 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 13 20:19:43.536639 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 13 20:19:43.543881 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 13 20:19:43.864975 initrd-setup-root[1239]: cut: /sysroot/etc/passwd: No such file or directory Apr 13 20:19:43.872002 initrd-setup-root[1246]: cut: /sysroot/etc/group: No such file or directory Apr 13 20:19:43.878147 initrd-setup-root[1253]: cut: /sysroot/etc/shadow: No such file or directory Apr 13 20:19:43.883907 initrd-setup-root[1260]: cut: /sysroot/etc/gshadow: No such file or directory Apr 13 20:19:44.095063 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 13 20:19:44.101294 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 13 20:19:44.104364 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 13 20:19:44.117827 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 13 20:19:44.119519 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:19:44.156979 ignition[1333]: INFO : Ignition 2.19.0 Apr 13 20:19:44.158408 ignition[1333]: INFO : Stage: mount Apr 13 20:19:44.159378 ignition[1333]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 20:19:44.159378 ignition[1333]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 13 20:19:44.159378 ignition[1333]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 13 20:19:44.163000 ignition[1333]: INFO : PUT result: OK Apr 13 20:19:44.163842 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 13 20:19:44.166090 ignition[1333]: INFO : mount: mount passed Apr 13 20:19:44.166090 ignition[1333]: INFO : Ignition finished successfully Apr 13 20:19:44.167511 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 13 20:19:44.173333 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 13 20:19:44.187456 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Apr 13 20:19:44.208223 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1345) Apr 13 20:19:44.214038 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:19:44.214129 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Apr 13 20:19:44.214151 kernel: BTRFS info (device nvme0n1p6): using free space tree Apr 13 20:19:44.223198 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 13 20:19:44.225299 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 13 20:19:44.255374 ignition[1361]: INFO : Ignition 2.19.0 Apr 13 20:19:44.255374 ignition[1361]: INFO : Stage: files Apr 13 20:19:44.256894 ignition[1361]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 20:19:44.256894 ignition[1361]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 13 20:19:44.256894 ignition[1361]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 13 20:19:44.258065 ignition[1361]: INFO : PUT result: OK Apr 13 20:19:44.266825 ignition[1361]: DEBUG : files: compiled without relabeling support, skipping Apr 13 20:19:44.267863 ignition[1361]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 13 20:19:44.269754 ignition[1361]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 13 20:19:44.295027 ignition[1361]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 13 20:19:44.296139 ignition[1361]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 13 20:19:44.296139 ignition[1361]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 13 20:19:44.295657 unknown[1361]: wrote ssh authorized keys file for user: core Apr 13 20:19:44.299477 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 13 
20:19:44.300590 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 13 20:19:44.300590 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 13 20:19:44.300590 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 13 20:19:44.373319 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 13 20:19:44.541816 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 13 20:19:44.541816 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 13 20:19:44.541816 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Apr 13 20:19:44.585369 systemd-networkd[1166]: eth0: Gained IPv6LL Apr 13 20:19:44.812067 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Apr 13 20:19:44.947636 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Apr 13 20:19:44.949556 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Apr 13 20:19:44.949556 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Apr 13 20:19:44.949556 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 13 20:19:44.949556 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: 
op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 13 20:19:44.949556 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 13 20:19:44.949556 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 13 20:19:44.949556 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 13 20:19:44.957910 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 13 20:19:44.957910 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 13 20:19:44.957910 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 13 20:19:44.957910 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 13 20:19:44.957910 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 13 20:19:44.957910 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 13 20:19:44.957910 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Apr 13 20:19:45.347264 ignition[1361]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Apr 13 20:19:45.695795 ignition[1361]: INFO : files: createFilesystemsFiles: 
createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 13 20:19:45.695795 ignition[1361]: INFO : files: op(d): [started] processing unit "containerd.service" Apr 13 20:19:45.699305 ignition[1361]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Apr 13 20:19:45.699305 ignition[1361]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Apr 13 20:19:45.699305 ignition[1361]: INFO : files: op(d): [finished] processing unit "containerd.service" Apr 13 20:19:45.699305 ignition[1361]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Apr 13 20:19:45.699305 ignition[1361]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 13 20:19:45.699305 ignition[1361]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 13 20:19:45.699305 ignition[1361]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Apr 13 20:19:45.699305 ignition[1361]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Apr 13 20:19:45.699305 ignition[1361]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Apr 13 20:19:45.699305 ignition[1361]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 13 20:19:45.699305 ignition[1361]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 13 20:19:45.699305 ignition[1361]: INFO : files: files passed Apr 13 20:19:45.699305 ignition[1361]: INFO : Ignition finished successfully Apr 13 
20:19:45.699814 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 13 20:19:45.710519 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 13 20:19:45.716697 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 13 20:19:45.719681 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 13 20:19:45.719817 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 13 20:19:45.745197 initrd-setup-root-after-ignition[1391]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 13 20:19:45.745197 initrd-setup-root-after-ignition[1391]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 13 20:19:45.748396 initrd-setup-root-after-ignition[1395]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 13 20:19:45.748761 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 13 20:19:45.750111 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 13 20:19:45.768491 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 13 20:19:45.800778 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 13 20:19:45.800922 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 13 20:19:45.802223 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 13 20:19:45.803511 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 13 20:19:45.804385 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 13 20:19:45.815705 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 13 20:19:45.830829 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
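The Ignition file, link, and drop-in operations logged above (op(7)–op(12)) are driven by the instance's Ignition config. A hedged sketch of a Butane fragment that would produce ops of this shape — only the paths and the sysext URL appear in the log; everything else (variant/version choice, drop-in contents) is an assumption for illustration:

```yaml
# Illustrative Butane fragment; paths and URL taken from the log above,
# all other details hypothetical.
variant: flatcar
version: 1.0.0
storage:
  files:
    - path: /opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw
      contents:
        source: https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw
  links:
    - path: /etc/extensions/kubernetes.raw
      target: /opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw
      hard: false
systemd:
  units:
    - name: containerd.service
      dropins:
        - name: 10-use-cgroupfs.conf
          contents: |
            # drop-in contents are not shown in the log
```

Butane transpiles this YAML to the JSON Ignition actually consumes in the initramfs; each `files`, `links`, and `dropins` entry corresponds to one `op(…)` pair in the log.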
Apr 13 20:19:45.835601 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 13 20:19:45.858545 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 13 20:19:45.859472 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 13 20:19:45.860479 systemd[1]: Stopped target timers.target - Timer Units. Apr 13 20:19:45.861400 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 13 20:19:45.861586 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 13 20:19:45.862905 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 13 20:19:45.863838 systemd[1]: Stopped target basic.target - Basic System. Apr 13 20:19:45.864658 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 13 20:19:45.865478 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 13 20:19:45.866286 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 13 20:19:45.867206 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 13 20:19:45.867980 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 13 20:19:45.868813 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 13 20:19:45.869955 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 13 20:19:45.870905 systemd[1]: Stopped target swap.target - Swaps. Apr 13 20:19:45.871616 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 13 20:19:45.871799 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 13 20:19:45.872949 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 13 20:19:45.873791 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 13 20:19:45.874518 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Apr 13 20:19:45.874982 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 13 20:19:45.875610 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 13 20:19:45.875791 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 13 20:19:45.877282 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 13 20:19:45.877476 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 13 20:19:45.878227 systemd[1]: ignition-files.service: Deactivated successfully. Apr 13 20:19:45.878385 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 13 20:19:45.886573 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 13 20:19:45.890699 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 13 20:19:45.891418 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 13 20:19:45.891680 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 13 20:19:45.894593 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 13 20:19:45.895147 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 13 20:19:45.908929 ignition[1415]: INFO : Ignition 2.19.0 Apr 13 20:19:45.909602 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 13 20:19:45.909748 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Apr 13 20:19:45.912629 ignition[1415]: INFO : Stage: umount Apr 13 20:19:45.912629 ignition[1415]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 20:19:45.912629 ignition[1415]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 13 20:19:45.912629 ignition[1415]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 13 20:19:45.917133 ignition[1415]: INFO : PUT result: OK Apr 13 20:19:45.917794 ignition[1415]: INFO : umount: umount passed Apr 13 20:19:45.918377 ignition[1415]: INFO : Ignition finished successfully Apr 13 20:19:45.919320 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 13 20:19:45.919468 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 13 20:19:45.921504 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 13 20:19:45.921615 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 13 20:19:45.922272 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 13 20:19:45.922338 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 13 20:19:45.923297 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 13 20:19:45.923359 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 13 20:19:45.924091 systemd[1]: Stopped target network.target - Network. Apr 13 20:19:45.925001 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 13 20:19:45.925071 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 13 20:19:45.925842 systemd[1]: Stopped target paths.target - Path Units. Apr 13 20:19:45.926616 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 13 20:19:45.930257 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 13 20:19:45.931259 systemd[1]: Stopped target slices.target - Slice Units. Apr 13 20:19:45.931725 systemd[1]: Stopped target sockets.target - Socket Units. 
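The `PUT http://169.254.169.254/latest/api/token` entries above are Ignition fetching an IMDSv2 session token before reading EC2 instance metadata. A minimal sketch of that request in Python — not Ignition's actual code, but the header name and TTL are the documented IMDSv2 values:

```python
# Build (but do not send) the session-token request IMDSv2 requires.
# IMDSv2 mandates a PUT with a TTL header; the token then accompanies
# subsequent metadata GETs in an X-aws-ec2-metadata-token header.
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token_request(ttl_seconds: int = 21600) -> urllib.request.Request:
    return urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )

req = imds_token_request()
```

Sending `req` via `urllib.request.urlopen` on an EC2 instance would return the token whose `PUT result: OK` the log records.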
Apr 13 20:19:45.932262 systemd[1]: iscsid.socket: Deactivated successfully. Apr 13 20:19:45.932323 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 13 20:19:45.932809 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 13 20:19:45.932854 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 13 20:19:45.934276 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 13 20:19:45.934345 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 13 20:19:45.935458 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 13 20:19:45.935520 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 13 20:19:45.936512 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 13 20:19:45.939422 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 13 20:19:45.944706 systemd-networkd[1166]: eth0: DHCPv6 lease lost Apr 13 20:19:45.945102 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 13 20:19:45.948018 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 13 20:19:45.948203 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 13 20:19:45.949299 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 13 20:19:45.949348 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 13 20:19:45.955451 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 13 20:19:45.955975 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 13 20:19:45.956061 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 13 20:19:45.957630 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 13 20:19:45.961957 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 13 20:19:45.962528 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Apr 13 20:19:45.969030 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 13 20:19:45.969624 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 13 20:19:45.970351 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 13 20:19:45.970415 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 13 20:19:45.971489 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 13 20:19:45.971550 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 13 20:19:45.972832 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 13 20:19:45.973023 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 13 20:19:45.983545 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 13 20:19:45.983642 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 13 20:19:45.984854 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 13 20:19:45.984910 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 13 20:19:45.986188 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 13 20:19:45.986262 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 13 20:19:45.987781 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 13 20:19:45.987849 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 13 20:19:45.989265 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 13 20:19:45.989339 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 20:19:45.996473 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 13 20:19:45.997958 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Apr 13 20:19:45.998040 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 13 20:19:45.998924 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 13 20:19:45.998998 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 13 20:19:45.999699 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 13 20:19:45.999762 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 13 20:19:46.002297 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 13 20:19:46.002369 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:19:46.003841 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 13 20:19:46.003969 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 13 20:19:46.009890 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 13 20:19:46.010399 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 13 20:19:46.068575 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 13 20:19:46.068737 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 13 20:19:46.070019 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 13 20:19:46.070891 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 13 20:19:46.070976 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 13 20:19:46.082475 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 13 20:19:46.093563 systemd[1]: Switching root. Apr 13 20:19:46.122436 systemd-journald[179]: Journal stopped Apr 13 20:19:47.913038 systemd-journald[179]: Received SIGTERM from PID 1 (systemd). 
Apr 13 20:19:47.913141 kernel: SELinux: policy capability network_peer_controls=1 Apr 13 20:19:47.915790 kernel: SELinux: policy capability open_perms=1 Apr 13 20:19:47.915830 kernel: SELinux: policy capability extended_socket_class=1 Apr 13 20:19:47.915858 kernel: SELinux: policy capability always_check_network=0 Apr 13 20:19:47.915877 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 13 20:19:47.915896 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 13 20:19:47.915913 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 13 20:19:47.915936 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 13 20:19:47.915959 kernel: audit: type=1403 audit(1776111586.691:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 13 20:19:47.915981 systemd[1]: Successfully loaded SELinux policy in 56.252ms. Apr 13 20:19:47.916014 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.238ms. Apr 13 20:19:47.916036 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 13 20:19:47.916063 systemd[1]: Detected virtualization amazon. Apr 13 20:19:47.916085 systemd[1]: Detected architecture x86-64. Apr 13 20:19:47.916106 systemd[1]: Detected first boot. Apr 13 20:19:47.916138 systemd[1]: Initializing machine ID from VM UUID. Apr 13 20:19:47.916158 zram_generator::config[1474]: No configuration found. Apr 13 20:19:47.916205 systemd[1]: Populated /etc with preset unit settings. Apr 13 20:19:47.916225 systemd[1]: Queued start job for default target multi-user.target. Apr 13 20:19:47.916247 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. 
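"Initializing machine ID from VM UUID" above means systemd derived the machine ID from the hypervisor-provided UUID (e.g. `/sys/class/dmi/id/product_uuid`) rather than generating a random one. A machine ID is the UUID's 32 hex digits, lowercased with dashes removed — the same string that names the journal directory later in this log. A conceptual sketch of that normalization (the input UUID here is reconstructed for illustration, not read from the system):

```python
# Normalize a DMI-style VM UUID into machine-id form: 32 lowercase hex
# digits, no dashes. This models the format only, not systemd's full logic.
def uuid_to_machine_id(vm_uuid: str) -> str:
    hexdigits = vm_uuid.replace("-", "").lower()
    if len(hexdigits) != 32 or any(c not in "0123456789abcdef" for c in hexdigits):
        raise ValueError(f"not a UUID: {vm_uuid!r}")
    return hexdigits

print(uuid_to_machine_id("EC22A39C-B46B-87CD-C1E9-AF2B861FEB55"))
# → ec22a39cb46b87cdc1e9af2b861feb55
```

That output matches the journal path `/run/log/journal/ec22a39cb46b87cdc1e9af2b861feb55` seen below, which systemd keys by machine ID.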
Apr 13 20:19:47.916273 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 13 20:19:47.916298 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 13 20:19:47.916320 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 13 20:19:47.916341 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 13 20:19:47.916366 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 13 20:19:47.916388 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 13 20:19:47.916416 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 13 20:19:47.916438 systemd[1]: Created slice user.slice - User and Session Slice. Apr 13 20:19:47.916458 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 13 20:19:47.916481 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 13 20:19:47.916503 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 13 20:19:47.916523 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 13 20:19:47.916548 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 13 20:19:47.916570 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 13 20:19:47.916592 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 13 20:19:47.916614 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 13 20:19:47.916635 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 13 20:19:47.916658 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Apr 13 20:19:47.916679 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 13 20:19:47.916700 systemd[1]: Reached target slices.target - Slice Units. Apr 13 20:19:47.916724 systemd[1]: Reached target swap.target - Swaps. Apr 13 20:19:47.916747 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 13 20:19:47.916768 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 13 20:19:47.916790 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 13 20:19:47.916812 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 13 20:19:47.916833 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 13 20:19:47.916851 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 13 20:19:47.916872 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 13 20:19:47.916892 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 13 20:19:47.916914 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 13 20:19:47.916938 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 13 20:19:47.916960 systemd[1]: Mounting media.mount - External Media Directory... Apr 13 20:19:47.916982 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 13 20:19:47.917004 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 13 20:19:47.917026 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 13 20:19:47.917048 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 13 20:19:47.917069 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 13 20:19:47.917090 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Apr 13 20:19:47.917115 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 13 20:19:47.917137 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 13 20:19:47.917158 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 13 20:19:47.917194 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 13 20:19:47.917216 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 13 20:19:47.917238 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 13 20:19:47.917259 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 13 20:19:47.917281 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 13 20:19:47.917303 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Apr 13 20:19:47.917329 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Apr 13 20:19:47.917351 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 13 20:19:47.917370 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 13 20:19:47.917390 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 13 20:19:47.917409 kernel: loop: module loaded Apr 13 20:19:47.917473 systemd-journald[1574]: Collecting audit messages is disabled. Apr 13 20:19:47.917510 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 13 20:19:47.917534 systemd-journald[1574]: Journal started Apr 13 20:19:47.917575 systemd-journald[1574]: Runtime Journal (/run/log/journal/ec22a39cb46b87cdc1e9af2b861feb55) is 4.7M, max 38.2M, 33.4M free. 
Apr 13 20:19:47.945227 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 13 20:19:47.945334 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 13 20:19:47.945364 kernel: fuse: init (API version 7.39) Apr 13 20:19:47.956215 systemd[1]: Started systemd-journald.service - Journal Service. Apr 13 20:19:47.960559 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 13 20:19:47.962412 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 13 20:19:47.966129 systemd[1]: Mounted media.mount - External Media Directory. Apr 13 20:19:47.970322 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 13 20:19:47.971273 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 13 20:19:47.975431 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 13 20:19:47.977705 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 13 20:19:47.979193 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 13 20:19:47.981877 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 13 20:19:47.984053 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 13 20:19:47.985284 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 13 20:19:47.988393 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 13 20:19:47.988661 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 13 20:19:47.991536 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 13 20:19:47.991812 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 13 20:19:47.992907 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Apr 13 20:19:47.993133 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 13 20:19:47.994627 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 13 20:19:47.995768 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 13 20:19:47.996848 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 13 20:19:48.019853 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 13 20:19:48.032346 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 13 20:19:48.045294 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 13 20:19:48.048294 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 13 20:19:48.064037 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 13 20:19:48.094204 kernel: ACPI: bus type drm_connector registered Apr 13 20:19:48.091930 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 13 20:19:48.094520 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 13 20:19:48.109368 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 13 20:19:48.112393 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 13 20:19:48.118364 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 13 20:19:48.125450 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
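The `modprobe@configfs.service`, `modprobe@dm_mod.service`, `modprobe@drm.service`, etc. entries above are all instances of one templated systemd unit: the instance name after `@` is substituted for the `%i` specifier. A simplified sketch of such a template (modeled on upstream systemd's `modprobe@.service`, not copied from this system):

```ini
# modprobe@.service (sketch) — started as modprobe@<module>.service;
# %i expands to the instance name, e.g. "dm_mod".
[Unit]
Description=Load Kernel Module %i
DefaultDependencies=no
Before=sysinit.target

[Service]
Type=oneshot
ExecStart=-/sbin/modprobe -abq %i
```

The leading `-` on `ExecStart` tells systemd to ignore a nonzero exit, which is why a missing optional module does not fail boot.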
Apr 13 20:19:48.138319 systemd-journald[1574]: Time spent on flushing to /var/log/journal/ec22a39cb46b87cdc1e9af2b861feb55 is 81.538ms for 968 entries. Apr 13 20:19:48.138319 systemd-journald[1574]: System Journal (/var/log/journal/ec22a39cb46b87cdc1e9af2b861feb55) is 8.0M, max 195.6M, 187.6M free. Apr 13 20:19:48.252397 systemd-journald[1574]: Received client request to flush runtime journal. Apr 13 20:19:48.140565 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 13 20:19:48.142646 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 13 20:19:48.154464 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 13 20:19:48.156440 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 13 20:19:48.159454 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 13 20:19:48.160496 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 13 20:19:48.172989 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 13 20:19:48.206781 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 13 20:19:48.224432 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 13 20:19:48.239659 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 13 20:19:48.259366 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 13 20:19:48.263661 systemd-tmpfiles[1623]: ACLs are not supported, ignoring. Apr 13 20:19:48.263690 systemd-tmpfiles[1623]: ACLs are not supported, ignoring. Apr 13 20:19:48.277991 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 13 20:19:48.289696 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 13 20:19:48.290992 udevadm[1635]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 13 20:19:48.360741 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 13 20:19:48.371436 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 13 20:19:48.397455 systemd-tmpfiles[1648]: ACLs are not supported, ignoring. Apr 13 20:19:48.397947 systemd-tmpfiles[1648]: ACLs are not supported, ignoring. Apr 13 20:19:48.404873 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 13 20:19:48.884118 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 13 20:19:48.891410 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 13 20:19:48.919895 systemd-udevd[1654]: Using default interface naming scheme 'v255'. Apr 13 20:19:48.973974 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 13 20:19:48.981512 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 13 20:19:49.017632 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 13 20:19:49.048675 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Apr 13 20:19:49.100190 (udev-worker)[1669]: Network interface NamePolicy= disabled on kernel command line. Apr 13 20:19:49.122151 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Apr 13 20:19:49.173193 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Apr 13 20:19:49.208198 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 13 20:19:49.239204 kernel: ACPI: button: Power Button [PWRF] Apr 13 20:19:49.255129 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Apr 13 20:19:49.257472 systemd-networkd[1658]: lo: Link UP Apr 13 20:19:49.260301 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Apr 13 20:19:49.260368 systemd-networkd[1658]: lo: Gained carrier Apr 13 20:19:49.264420 systemd-networkd[1658]: Enumeration completed Apr 13 20:19:49.264973 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 13 20:19:49.269192 kernel: ACPI: button: Sleep Button [SLPF] Apr 13 20:19:49.270932 systemd-networkd[1658]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 20:19:49.270944 systemd-networkd[1658]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 13 20:19:49.277760 systemd-networkd[1658]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 20:19:49.278087 systemd-networkd[1658]: eth0: Link UP Apr 13 20:19:49.278471 systemd-networkd[1658]: eth0: Gained carrier Apr 13 20:19:49.278499 systemd-networkd[1658]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 20:19:49.281468 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Apr 13 20:19:49.289333 systemd-networkd[1658]: eth0: DHCPv4 address 172.31.31.175/20, gateway 172.31.16.1 acquired from 172.31.16.1 Apr 13 20:19:49.309193 kernel: mousedev: PS/2 mouse device common for all mice Apr 13 20:19:49.341248 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (1666) Apr 13 20:19:49.343616 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 20:19:49.365087 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 13 20:19:49.365658 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:19:49.386386 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 20:19:49.511659 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 13 20:19:49.534475 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:19:49.541774 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Apr 13 20:19:49.547403 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 13 20:19:49.564332 lvm[1781]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 13 20:19:49.594854 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 13 20:19:49.596613 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 13 20:19:49.607579 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 13 20:19:49.613685 lvm[1784]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 13 20:19:49.643031 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 13 20:19:49.644707 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
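The DHCPv4 lease logged above — 172.31.31.175/20 from gateway 172.31.16.1 — can be sanity-checked with Python's `ipaddress` module; a quick sketch confirming the gateway sits inside the leased /20:

```python
# Verify the leased address and its gateway share the /20 subnet
# reported by systemd-networkd in the log above.
import ipaddress

iface = ipaddress.ip_interface("172.31.31.175/20")
gateway = ipaddress.ip_address("172.31.16.1")

print(iface.network)             # 172.31.16.0/20
print(gateway in iface.network)  # True
```

With a /20 mask (255.255.240.0), 172.31.31.175 falls in 172.31.16.0/20, so the gateway at 172.31.16.1 is directly reachable — consistent with a typical EC2 VPC subnet layout.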
Apr 13 20:19:49.645666 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 13 20:19:49.645707 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 20:19:49.646866 systemd[1]: Reached target machines.target - Containers.
Apr 13 20:19:49.648871 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 13 20:19:49.653465 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 13 20:19:49.663603 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 13 20:19:49.664986 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 20:19:49.667542 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 13 20:19:49.679406 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 13 20:19:49.683710 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 13 20:19:49.687718 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 13 20:19:49.706007 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 13 20:19:49.720957 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 13 20:19:49.722724 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 13 20:19:49.738213 kernel: loop0: detected capacity change from 0 to 140768
Apr 13 20:19:49.878905 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 13 20:19:49.902263 kernel: loop1: detected capacity change from 0 to 61336
Apr 13 20:19:49.966257 kernel: loop2: detected capacity change from 0 to 142488
Apr 13 20:19:50.037200 kernel: loop3: detected capacity change from 0 to 228704
Apr 13 20:19:50.138198 kernel: loop4: detected capacity change from 0 to 140768
Apr 13 20:19:50.171202 kernel: loop5: detected capacity change from 0 to 61336
Apr 13 20:19:50.204230 kernel: loop6: detected capacity change from 0 to 142488
Apr 13 20:19:50.258220 kernel: loop7: detected capacity change from 0 to 228704
Apr 13 20:19:50.308467 (sd-merge)[1806]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Apr 13 20:19:50.309283 (sd-merge)[1806]: Merged extensions into '/usr'.
Apr 13 20:19:50.317055 systemd[1]: Reloading requested from client PID 1792 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 13 20:19:50.317081 systemd[1]: Reloading...
Apr 13 20:19:50.446213 zram_generator::config[1840]: No configuration found.
Apr 13 20:19:50.625295 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 20:19:50.717684 systemd[1]: Reloading finished in 399 ms.
Apr 13 20:19:50.745354 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 13 20:19:50.755546 systemd[1]: Starting ensure-sysext.service...
Apr 13 20:19:50.760389 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 20:19:50.773984 systemd[1]: Reloading requested from client PID 1891 ('systemctl') (unit ensure-sysext.service)...
Apr 13 20:19:50.774012 systemd[1]: Reloading...
Apr 13 20:19:50.791211 systemd-tmpfiles[1892]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 13 20:19:50.792273 systemd-tmpfiles[1892]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 13 20:19:50.793835 systemd-tmpfiles[1892]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 13 20:19:50.794011 systemd-networkd[1658]: eth0: Gained IPv6LL
Apr 13 20:19:50.795918 systemd-tmpfiles[1892]: ACLs are not supported, ignoring.
Apr 13 20:19:50.796108 systemd-tmpfiles[1892]: ACLs are not supported, ignoring.
Apr 13 20:19:50.804907 systemd-tmpfiles[1892]: Detected autofs mount point /boot during canonicalization of boot.
Apr 13 20:19:50.806205 systemd-tmpfiles[1892]: Skipping /boot
Apr 13 20:19:50.830828 systemd-tmpfiles[1892]: Detected autofs mount point /boot during canonicalization of boot.
Apr 13 20:19:50.830991 systemd-tmpfiles[1892]: Skipping /boot
Apr 13 20:19:50.844077 ldconfig[1788]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 13 20:19:50.895197 zram_generator::config[1924]: No configuration found.
Apr 13 20:19:51.035868 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 20:19:51.111049 systemd[1]: Reloading finished in 335 ms.
Apr 13 20:19:51.129819 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 13 20:19:51.131337 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 13 20:19:51.138114 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 20:19:51.151397 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 13 20:19:51.159529 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 13 20:19:51.163386 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 13 20:19:51.169814 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 20:19:51.172349 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 13 20:19:51.197291 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:19:51.197645 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 20:19:51.208235 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 20:19:51.222518 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 20:19:51.228666 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 20:19:51.231352 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 20:19:51.231565 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:19:51.240622 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 20:19:51.240885 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 20:19:51.249784 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 20:19:51.250049 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 20:19:51.264844 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:19:51.269585 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 20:19:51.269992 augenrules[2014]: No rules
Apr 13 20:19:51.282297 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 20:19:51.296914 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 20:19:51.299708 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 20:19:51.299930 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:19:51.301563 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 13 20:19:51.305145 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 13 20:19:51.306722 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 13 20:19:51.308107 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 20:19:51.309425 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 20:19:51.314995 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 20:19:51.316317 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 20:19:51.320615 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 20:19:51.320878 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 20:19:51.345034 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:19:51.346701 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 20:19:51.353217 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 20:19:51.363505 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 13 20:19:51.369911 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 20:19:51.386502 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 20:19:51.390363 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 20:19:51.390691 systemd[1]: Reached target time-set.target - System Time Set.
Apr 13 20:19:51.411208 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 13 20:19:51.411822 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:19:51.417557 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 13 20:19:51.426970 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 20:19:51.427235 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 20:19:51.429040 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 13 20:19:51.429312 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 13 20:19:51.430567 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 20:19:51.431014 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 20:19:51.432582 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 20:19:51.432938 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 20:19:51.438018 systemd[1]: Finished ensure-sysext.service.
Apr 13 20:19:51.452634 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 20:19:51.452938 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 20:19:51.453058 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 13 20:19:51.463152 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 13 20:19:51.464781 systemd-resolved[1991]: Positive Trust Anchors:
Apr 13 20:19:51.465927 systemd-resolved[1991]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 20:19:51.466048 systemd-resolved[1991]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 20:19:51.473599 systemd-resolved[1991]: Defaulting to hostname 'linux'.
Apr 13 20:19:51.475804 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 20:19:51.476416 systemd[1]: Reached target network.target - Network.
Apr 13 20:19:51.476841 systemd[1]: Reached target network-online.target - Network is Online.
Apr 13 20:19:51.477258 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 20:19:51.477627 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 20:19:51.478081 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 13 20:19:51.478516 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 13 20:19:51.479160 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 13 20:19:51.479657 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 13 20:19:51.480013 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 13 20:19:51.480439 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 13 20:19:51.480483 systemd[1]: Reached target paths.target - Path Units.
Apr 13 20:19:51.480830 systemd[1]: Reached target timers.target - Timer Units.
Apr 13 20:19:51.481828 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 13 20:19:51.484055 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 13 20:19:51.486554 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 13 20:19:51.488396 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 13 20:19:51.489259 systemd[1]: Reached target sockets.target - Socket Units.
Apr 13 20:19:51.489902 systemd[1]: Reached target basic.target - Basic System.
Apr 13 20:19:51.490871 systemd[1]: System is tainted: cgroupsv1
Apr 13 20:19:51.491046 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 13 20:19:51.491190 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 13 20:19:51.494299 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 13 20:19:51.503434 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 13 20:19:51.507376 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 13 20:19:51.536193 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 13 20:19:51.543500 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 13 20:19:51.547921 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 13 20:19:51.556334 jq[2060]: false
Apr 13 20:19:51.559321 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:19:51.572859 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 13 20:19:51.588404 systemd[1]: Started ntpd.service - Network Time Service.
Apr 13 20:19:51.608383 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 13 20:19:51.614489 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 13 20:19:51.623329 systemd[1]: Starting setup-oem.service - Setup OEM...
Apr 13 20:19:51.632268 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 13 20:19:51.644106 dbus-daemon[2058]: [system] SELinux support is enabled
Apr 13 20:19:51.651117 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 13 20:19:51.661159 coreos-metadata[2057]: Apr 13 20:19:51.654 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 13 20:19:51.674314 coreos-metadata[2057]: Apr 13 20:19:51.664 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Apr 13 20:19:51.674431 extend-filesystems[2063]: Found loop4
Apr 13 20:19:51.674431 extend-filesystems[2063]: Found loop5
Apr 13 20:19:51.674431 extend-filesystems[2063]: Found loop6
Apr 13 20:19:51.674431 extend-filesystems[2063]: Found loop7
Apr 13 20:19:51.674431 extend-filesystems[2063]: Found nvme0n1
Apr 13 20:19:51.674431 extend-filesystems[2063]: Found nvme0n1p1
Apr 13 20:19:51.674431 extend-filesystems[2063]: Found nvme0n1p2
Apr 13 20:19:51.674431 extend-filesystems[2063]: Found nvme0n1p3
Apr 13 20:19:51.674431 extend-filesystems[2063]: Found usr
Apr 13 20:19:51.674431 extend-filesystems[2063]: Found nvme0n1p4
Apr 13 20:19:51.674431 extend-filesystems[2063]: Found nvme0n1p6
Apr 13 20:19:51.674431 extend-filesystems[2063]: Found nvme0n1p7
Apr 13 20:19:51.674431 extend-filesystems[2063]: Found nvme0n1p9
Apr 13 20:19:51.674431 extend-filesystems[2063]: Checking size of /dev/nvme0n1p9
Apr 13 20:19:51.719305 coreos-metadata[2057]: Apr 13 20:19:51.678 INFO Fetch successful
Apr 13 20:19:51.719305 coreos-metadata[2057]: Apr 13 20:19:51.678 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Apr 13 20:19:51.719305 coreos-metadata[2057]: Apr 13 20:19:51.700 INFO Fetch successful
Apr 13 20:19:51.719305 coreos-metadata[2057]: Apr 13 20:19:51.700 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Apr 13 20:19:51.719305 coreos-metadata[2057]: Apr 13 20:19:51.700 INFO Fetch successful
Apr 13 20:19:51.719305 coreos-metadata[2057]: Apr 13 20:19:51.707 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Apr 13 20:19:51.719305 coreos-metadata[2057]: Apr 13 20:19:51.707 INFO Fetch successful
Apr 13 20:19:51.719305 coreos-metadata[2057]: Apr 13 20:19:51.707 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Apr 13 20:19:51.719305 coreos-metadata[2057]: Apr 13 20:19:51.707 INFO Fetch failed with 404: resource not found
Apr 13 20:19:51.719305 coreos-metadata[2057]: Apr 13 20:19:51.707 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Apr 13 20:19:51.719305 coreos-metadata[2057]: Apr 13 20:19:51.708 INFO Fetch successful
Apr 13 20:19:51.719305 coreos-metadata[2057]: Apr 13 20:19:51.708 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Apr 13 20:19:51.719305 coreos-metadata[2057]: Apr 13 20:19:51.710 INFO Fetch successful
Apr 13 20:19:51.719305 coreos-metadata[2057]: Apr 13 20:19:51.710 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Apr 13 20:19:51.719849 extend-filesystems[2063]: Resized partition /dev/nvme0n1p9
Apr 13 20:19:51.677022 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 13 20:19:51.674771 dbus-daemon[2058]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1658 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Apr 13 20:19:51.682388 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 13 20:19:51.694386 systemd[1]: Starting update-engine.service - Update Engine...
Apr 13 20:19:51.719295 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 13 20:19:51.726390 extend-filesystems[2095]: resize2fs 1.47.1 (20-May-2024)
Apr 13 20:19:51.728911 coreos-metadata[2057]: Apr 13 20:19:51.725 INFO Fetch successful
Apr 13 20:19:51.728911 coreos-metadata[2057]: Apr 13 20:19:51.725 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Apr 13 20:19:51.728911 coreos-metadata[2057]: Apr 13 20:19:51.727 INFO Fetch successful
Apr 13 20:19:51.728911 coreos-metadata[2057]: Apr 13 20:19:51.727 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Apr 13 20:19:51.725596 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 13 20:19:51.732881 coreos-metadata[2057]: Apr 13 20:19:51.729 INFO Fetch successful
Apr 13 20:19:51.738253 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Apr 13 20:19:51.750868 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 13 20:19:51.751458 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 13 20:19:51.765396 systemd[1]: motdgen.service: Deactivated successfully.
Apr 13 20:19:51.766384 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 13 20:19:51.772877 ntpd[2067]: ntpd 4.2.8p17@1.4004-o Mon Apr 13 18:02:33 UTC 2026 (1): Starting
Apr 13 20:19:51.772917 ntpd[2067]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 13 20:19:51.773425 ntpd[2067]: 13 Apr 20:19:51 ntpd[2067]: ntpd 4.2.8p17@1.4004-o Mon Apr 13 18:02:33 UTC 2026 (1): Starting
Apr 13 20:19:51.773425 ntpd[2067]: 13 Apr 20:19:51 ntpd[2067]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 13 20:19:51.773425 ntpd[2067]: 13 Apr 20:19:51 ntpd[2067]: ----------------------------------------------------
Apr 13 20:19:51.773425 ntpd[2067]: 13 Apr 20:19:51 ntpd[2067]: ntp-4 is maintained by Network Time Foundation,
Apr 13 20:19:51.773425 ntpd[2067]: 13 Apr 20:19:51 ntpd[2067]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 13 20:19:51.773425 ntpd[2067]: 13 Apr 20:19:51 ntpd[2067]: corporation. Support and training for ntp-4 are
Apr 13 20:19:51.773425 ntpd[2067]: 13 Apr 20:19:51 ntpd[2067]: available at https://www.nwtime.org/support
Apr 13 20:19:51.773425 ntpd[2067]: 13 Apr 20:19:51 ntpd[2067]: ----------------------------------------------------
Apr 13 20:19:51.772928 ntpd[2067]: ----------------------------------------------------
Apr 13 20:19:51.772936 ntpd[2067]: ntp-4 is maintained by Network Time Foundation,
Apr 13 20:19:51.772945 ntpd[2067]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 13 20:19:51.772953 ntpd[2067]: corporation. Support and training for ntp-4 are
Apr 13 20:19:51.772962 ntpd[2067]: available at https://www.nwtime.org/support
Apr 13 20:19:51.772971 ntpd[2067]: ----------------------------------------------------
Apr 13 20:19:51.789839 ntpd[2067]: proto: precision = 0.079 usec (-24)
Apr 13 20:19:51.792334 ntpd[2067]: 13 Apr 20:19:51 ntpd[2067]: proto: precision = 0.079 usec (-24)
Apr 13 20:19:51.804247 ntpd[2067]: 13 Apr 20:19:51 ntpd[2067]: basedate set to 2026-04-01
Apr 13 20:19:51.804247 ntpd[2067]: 13 Apr 20:19:51 ntpd[2067]: gps base set to 2026-04-05 (week 2413)
Apr 13 20:19:51.801744 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 13 20:19:51.801553 ntpd[2067]: basedate set to 2026-04-01
Apr 13 20:19:51.802118 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 13 20:19:51.801581 ntpd[2067]: gps base set to 2026-04-05 (week 2413)
Apr 13 20:19:51.807515 jq[2096]: true
Apr 13 20:19:51.820634 update_engine[2087]: I20260413 20:19:51.820523 2087 main.cc:92] Flatcar Update Engine starting
Apr 13 20:19:51.841255 update_engine[2087]: I20260413 20:19:51.838553 2087 update_check_scheduler.cc:74] Next update check in 6m38s
Apr 13 20:19:51.841797 ntpd[2067]: Listen and drop on 0 v6wildcard [::]:123
Apr 13 20:19:51.841863 ntpd[2067]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 13 20:19:51.841948 ntpd[2067]: 13 Apr 20:19:51 ntpd[2067]: Listen and drop on 0 v6wildcard [::]:123
Apr 13 20:19:51.841948 ntpd[2067]: 13 Apr 20:19:51 ntpd[2067]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 13 20:19:51.843854 ntpd[2067]: Listen normally on 2 lo 127.0.0.1:123
Apr 13 20:19:51.845295 ntpd[2067]: 13 Apr 20:19:51 ntpd[2067]: Listen normally on 2 lo 127.0.0.1:123
Apr 13 20:19:51.845295 ntpd[2067]: 13 Apr 20:19:51 ntpd[2067]: Listen normally on 3 eth0 172.31.31.175:123
Apr 13 20:19:51.845295 ntpd[2067]: 13 Apr 20:19:51 ntpd[2067]: Listen normally on 4 lo [::1]:123
Apr 13 20:19:51.845295 ntpd[2067]: 13 Apr 20:19:51 ntpd[2067]: Listen normally on 5 eth0 [fe80::4c5:71ff:fec7:ac93%2]:123
Apr 13 20:19:51.845295 ntpd[2067]: 13 Apr 20:19:51 ntpd[2067]: Listening on routing socket on fd #22 for interface updates
Apr 13 20:19:51.843918 ntpd[2067]: Listen normally on 3 eth0 172.31.31.175:123
Apr 13 20:19:51.843969 ntpd[2067]: Listen normally on 4 lo [::1]:123
Apr 13 20:19:51.844023 ntpd[2067]: Listen normally on 5 eth0 [fe80::4c5:71ff:fec7:ac93%2]:123
Apr 13 20:19:51.844096 ntpd[2067]: Listening on routing socket on fd #22 for interface updates
Apr 13 20:19:51.857292 ntpd[2067]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 13 20:19:51.859341 ntpd[2067]: 13 Apr 20:19:51 ntpd[2067]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 13 20:19:51.859341 ntpd[2067]: 13 Apr 20:19:51 ntpd[2067]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 13 20:19:51.857329 ntpd[2067]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 13 20:19:51.861890 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 13 20:19:51.876058 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 13 20:19:51.877137 (ntainerd)[2119]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 13 20:19:51.892455 systemd-logind[2083]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 13 20:19:51.894300 systemd-logind[2083]: Watching system buttons on /dev/input/event3 (Sleep Button)
Apr 13 20:19:51.894330 systemd-logind[2083]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 13 20:19:51.894783 systemd-logind[2083]: New seat seat0.
Apr 13 20:19:51.898388 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 13 20:19:51.932945 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 13 20:19:51.937522 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 13 20:19:51.943635 dbus-daemon[2058]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 13 20:19:51.945624 jq[2112]: true
Apr 13 20:19:51.937569 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 13 20:19:51.938898 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 13 20:19:51.938939 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 13 20:19:51.953551 systemd[1]: Started update-engine.service - Update Engine.
Apr 13 20:19:51.976492 tar[2108]: linux-amd64/LICENSE
Apr 13 20:19:51.976492 tar[2108]: linux-amd64/helm
Apr 13 20:19:51.996834 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Apr 13 20:19:52.002345 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 13 20:19:52.010512 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 13 20:19:52.017820 systemd[1]: Finished setup-oem.service - Setup OEM.
Apr 13 20:19:52.036546 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Apr 13 20:19:52.081078 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067
Apr 13 20:19:52.110410 extend-filesystems[2095]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Apr 13 20:19:52.110410 extend-filesystems[2095]: old_desc_blocks = 1, new_desc_blocks = 2
Apr 13 20:19:52.110410 extend-filesystems[2095]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
Apr 13 20:19:52.143024 extend-filesystems[2063]: Resized filesystem in /dev/nvme0n1p9
Apr 13 20:19:52.114328 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 13 20:19:52.116507 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 13 20:19:52.184200 bash[2172]: Updated "/home/core/.ssh/authorized_keys"
Apr 13 20:19:52.191815 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 13 20:19:52.203192 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (2168)
Apr 13 20:19:52.211320 systemd[1]: Starting sshkeys.service...
Apr 13 20:19:52.283857 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
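The resize messages above report the root filesystem growing from 553472 to 3587067 blocks of 4 KiB each. As a quick sanity check of those numbers (plain arithmetic, not part of the boot sequence; the helper name is illustrative):

```python
# Block counts come straight from the EXT4-fs / resize2fs log lines above;
# the block size is 4096 bytes per the "(4k)" in the resize2fs message.
BLOCK_SIZE = 4096

def blocks_to_gib(blocks: int, block_size: int = BLOCK_SIZE) -> float:
    """Convert an ext4 block count to a size in GiB."""
    return blocks * block_size / 2**30

print(f"before: {blocks_to_gib(553_472):.2f} GiB")    # ~2.11 GiB
print(f"after:  {blocks_to_gib(3_587_067):.2f} GiB")  # ~13.68 GiB
```

This matches what one would expect from `extend-filesystems` growing partition 9 to fill the EBS volume and then resizing the mounted filesystem online.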
Apr 13 20:19:52.291460 amazon-ssm-agent[2153]: Initializing new seelog logger
Apr 13 20:19:52.294769 amazon-ssm-agent[2153]: New Seelog Logger Creation Complete
Apr 13 20:19:52.294769 amazon-ssm-agent[2153]: 2026/04/13 20:19:52 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 13 20:19:52.294769 amazon-ssm-agent[2153]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 13 20:19:52.294769 amazon-ssm-agent[2153]: 2026/04/13 20:19:52 processing appconfig overrides
Apr 13 20:19:52.295930 amazon-ssm-agent[2153]: 2026/04/13 20:19:52 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 13 20:19:52.295930 amazon-ssm-agent[2153]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 13 20:19:52.296048 amazon-ssm-agent[2153]: 2026/04/13 20:19:52 processing appconfig overrides
Apr 13 20:19:52.297320 amazon-ssm-agent[2153]: 2026/04/13 20:19:52 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 13 20:19:52.297320 amazon-ssm-agent[2153]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 13 20:19:52.297320 amazon-ssm-agent[2153]: 2026/04/13 20:19:52 processing appconfig overrides
Apr 13 20:19:52.297320 amazon-ssm-agent[2153]: 2026-04-13 20:19:52 INFO Proxy environment variables:
Apr 13 20:19:52.296595 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 13 20:19:52.336710 amazon-ssm-agent[2153]: 2026/04/13 20:19:52 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 13 20:19:52.336710 amazon-ssm-agent[2153]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 13 20:19:52.336863 amazon-ssm-agent[2153]: 2026/04/13 20:19:52 processing appconfig overrides
Apr 13 20:19:52.411190 amazon-ssm-agent[2153]: 2026-04-13 20:19:52 INFO https_proxy:
Apr 13 20:19:52.457037 sshd_keygen[2107]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 13 20:19:52.509381 amazon-ssm-agent[2153]: 2026-04-13 20:19:52 INFO http_proxy:
Apr 13 20:19:52.613029 amazon-ssm-agent[2153]: 2026-04-13 20:19:52 INFO no_proxy:
Apr 13 20:19:52.619861 coreos-metadata[2200]: Apr 13 20:19:52.618 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 13 20:19:52.623112 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 13 20:19:52.629924 coreos-metadata[2200]: Apr 13 20:19:52.629 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Apr 13 20:19:52.637012 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 13 20:19:52.639491 coreos-metadata[2200]: Apr 13 20:19:52.639 INFO Fetch successful
Apr 13 20:19:52.651135 coreos-metadata[2200]: Apr 13 20:19:52.639 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Apr 13 20:19:52.651135 coreos-metadata[2200]: Apr 13 20:19:52.649 INFO Fetch successful
Apr 13 20:19:52.655049 unknown[2200]: wrote ssh authorized keys file for user: core
Apr 13 20:19:52.723504 locksmithd[2151]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 13 20:19:52.729803 systemd[1]: issuegen.service: Deactivated successfully.
Apr 13 20:19:52.730229 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 13 20:19:52.739204 amazon-ssm-agent[2153]: 2026-04-13 20:19:52 INFO Checking if agent identity type OnPrem can be assumed
Apr 13 20:19:52.747044 update-ssh-keys[2272]: Updated "/home/core/.ssh/authorized_keys"
Apr 13 20:19:52.751587 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 13 20:19:52.759056 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 13 20:19:52.791087 systemd[1]: Finished sshkeys.service. Apr 13 20:19:52.828652 dbus-daemon[2058]: [system] Successfully activated service 'org.freedesktop.hostname1' Apr 13 20:19:52.829291 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Apr 13 20:19:52.840425 amazon-ssm-agent[2153]: 2026-04-13 20:19:52 INFO Checking if agent identity type EC2 can be assumed Apr 13 20:19:52.841900 dbus-daemon[2058]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2150 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Apr 13 20:19:52.858530 systemd[1]: Starting polkit.service - Authorization Manager... Apr 13 20:19:52.884522 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 13 20:19:52.897667 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 13 20:19:52.912110 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 13 20:19:52.913027 systemd[1]: Reached target getty.target - Login Prompts. Apr 13 20:19:52.940737 amazon-ssm-agent[2153]: 2026-04-13 20:19:52 INFO Agent will take identity from EC2 Apr 13 20:19:52.946381 polkitd[2290]: Started polkitd version 121 Apr 13 20:19:52.981693 polkitd[2290]: Loading rules from directory /etc/polkit-1/rules.d Apr 13 20:19:52.981791 polkitd[2290]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 13 20:19:52.984304 polkitd[2290]: Finished loading, compiling and executing 2 rules Apr 13 20:19:52.985367 dbus-daemon[2058]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 13 20:19:52.986816 systemd[1]: Started polkit.service - Authorization Manager. 
Apr 13 20:19:52.987077 polkitd[2290]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Apr 13 20:19:53.029404 systemd-hostnamed[2150]: Hostname set to (transient)
Apr 13 20:19:53.029927 systemd-resolved[1991]: System hostname changed to 'ip-172-31-31-175'.
Apr 13 20:19:53.041241 amazon-ssm-agent[2153]: 2026-04-13 20:19:52 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 13 20:19:53.062606 containerd[2119]: time="2026-04-13T20:19:53.062500543Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 13 20:19:53.139697 amazon-ssm-agent[2153]: 2026-04-13 20:19:52 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 13 20:19:53.191285 containerd[2119]: time="2026-04-13T20:19:53.191202923Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 13 20:19:53.195521 containerd[2119]: time="2026-04-13T20:19:53.195464148Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 13 20:19:53.196102 containerd[2119]: time="2026-04-13T20:19:53.195691043Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 13 20:19:53.196102 containerd[2119]: time="2026-04-13T20:19:53.195726864Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 13 20:19:53.196102 containerd[2119]: time="2026-04-13T20:19:53.195915002Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 13 20:19:53.196102 containerd[2119]: time="2026-04-13T20:19:53.195937762Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 13 20:19:53.196102 containerd[2119]: time="2026-04-13T20:19:53.196009690Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 20:19:53.196102 containerd[2119]: time="2026-04-13T20:19:53.196029529Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 13 20:19:53.196643 containerd[2119]: time="2026-04-13T20:19:53.196615615Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 20:19:53.196722 containerd[2119]: time="2026-04-13T20:19:53.196706847Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 13 20:19:53.196802 containerd[2119]: time="2026-04-13T20:19:53.196785641Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 20:19:53.196861 containerd[2119]: time="2026-04-13T20:19:53.196848603Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 13 20:19:53.197043 containerd[2119]: time="2026-04-13T20:19:53.197025971Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 13 20:19:53.197394 containerd[2119]: time="2026-04-13T20:19:53.197372659Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 13 20:19:53.197697 containerd[2119]: time="2026-04-13T20:19:53.197673321Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 13 20:19:53.198224 containerd[2119]: time="2026-04-13T20:19:53.198201918Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 13 20:19:53.198414 containerd[2119]: time="2026-04-13T20:19:53.198394759Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 13 20:19:53.198536 containerd[2119]: time="2026-04-13T20:19:53.198521139Z" level=info msg="metadata content store policy set" policy=shared
Apr 13 20:19:53.203983 containerd[2119]: time="2026-04-13T20:19:53.203942701Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 13 20:19:53.204299 containerd[2119]: time="2026-04-13T20:19:53.204272447Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 13 20:19:53.204416 containerd[2119]: time="2026-04-13T20:19:53.204400853Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 13 20:19:53.206192 containerd[2119]: time="2026-04-13T20:19:53.205204371Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 13 20:19:53.206192 containerd[2119]: time="2026-04-13T20:19:53.205238479Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 13 20:19:53.206192 containerd[2119]: time="2026-04-13T20:19:53.205423879Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 13 20:19:53.206192 containerd[2119]: time="2026-04-13T20:19:53.205870256Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 13 20:19:53.206192 containerd[2119]: time="2026-04-13T20:19:53.206000374Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 13 20:19:53.206192 containerd[2119]: time="2026-04-13T20:19:53.206023567Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 13 20:19:53.206192 containerd[2119]: time="2026-04-13T20:19:53.206045077Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 13 20:19:53.206192 containerd[2119]: time="2026-04-13T20:19:53.206065500Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 13 20:19:53.206192 containerd[2119]: time="2026-04-13T20:19:53.206093747Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 13 20:19:53.206192 containerd[2119]: time="2026-04-13T20:19:53.206112456Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 13 20:19:53.206192 containerd[2119]: time="2026-04-13T20:19:53.206137954Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 13 20:19:53.206672 containerd[2119]: time="2026-04-13T20:19:53.206159705Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 13 20:19:53.206770 containerd[2119]: time="2026-04-13T20:19:53.206753389Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 13 20:19:53.206863 containerd[2119]: time="2026-04-13T20:19:53.206848398Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 13 20:19:53.207072 containerd[2119]: time="2026-04-13T20:19:53.207055235Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 13 20:19:53.207214 containerd[2119]: time="2026-04-13T20:19:53.207196409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 13 20:19:53.210571 containerd[2119]: time="2026-04-13T20:19:53.209213848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 13 20:19:53.210571 containerd[2119]: time="2026-04-13T20:19:53.209263175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 13 20:19:53.210571 containerd[2119]: time="2026-04-13T20:19:53.209289008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 13 20:19:53.210571 containerd[2119]: time="2026-04-13T20:19:53.209310237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 13 20:19:53.210571 containerd[2119]: time="2026-04-13T20:19:53.209345269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 13 20:19:53.210571 containerd[2119]: time="2026-04-13T20:19:53.209364680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 13 20:19:53.210571 containerd[2119]: time="2026-04-13T20:19:53.209384932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 13 20:19:53.210571 containerd[2119]: time="2026-04-13T20:19:53.209421401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 13 20:19:53.210571 containerd[2119]: time="2026-04-13T20:19:53.209466284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 13 20:19:53.210571 containerd[2119]: time="2026-04-13T20:19:53.209504038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 13 20:19:53.210571 containerd[2119]: time="2026-04-13T20:19:53.209521317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 13 20:19:53.210571 containerd[2119]: time="2026-04-13T20:19:53.209539003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 13 20:19:53.210571 containerd[2119]: time="2026-04-13T20:19:53.209579892Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 13 20:19:53.210571 containerd[2119]: time="2026-04-13T20:19:53.209627878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 13 20:19:53.210571 containerd[2119]: time="2026-04-13T20:19:53.209661435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 13 20:19:53.211188 containerd[2119]: time="2026-04-13T20:19:53.209680958Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 13 20:19:53.211188 containerd[2119]: time="2026-04-13T20:19:53.209851518Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 13 20:19:53.211188 containerd[2119]: time="2026-04-13T20:19:53.209937436Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 13 20:19:53.211188 containerd[2119]: time="2026-04-13T20:19:53.210047434Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 13 20:19:53.211188 containerd[2119]: time="2026-04-13T20:19:53.210068341Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 13 20:19:53.211188 containerd[2119]: time="2026-04-13T20:19:53.210085363Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 13 20:19:53.211188 containerd[2119]: time="2026-04-13T20:19:53.210104930Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 13 20:19:53.211188 containerd[2119]: time="2026-04-13T20:19:53.210133250Z" level=info msg="NRI interface is disabled by configuration."
Apr 13 20:19:53.211188 containerd[2119]: time="2026-04-13T20:19:53.210147932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 13 20:19:53.212876 containerd[2119]: time="2026-04-13T20:19:53.211654568Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 13 20:19:53.212876 containerd[2119]: time="2026-04-13T20:19:53.211782311Z" level=info msg="Connect containerd service"
Apr 13 20:19:53.212876 containerd[2119]: time="2026-04-13T20:19:53.211851351Z" level=info msg="using legacy CRI server"
Apr 13 20:19:53.212876 containerd[2119]: time="2026-04-13T20:19:53.211862582Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 13 20:19:53.212876 containerd[2119]: time="2026-04-13T20:19:53.212046109Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 13 20:19:53.213631 containerd[2119]: time="2026-04-13T20:19:53.213603878Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 13 20:19:53.214795 containerd[2119]: time="2026-04-13T20:19:53.214216902Z" level=info msg="Start subscribing containerd event"
Apr 13 20:19:53.214795 containerd[2119]: time="2026-04-13T20:19:53.214272361Z" level=info msg="Start recovering state"
Apr 13 20:19:53.214795 containerd[2119]: time="2026-04-13T20:19:53.214349201Z" level=info msg="Start event monitor"
Apr 13 20:19:53.214795 containerd[2119]: time="2026-04-13T20:19:53.214368037Z" level=info msg="Start snapshots syncer"
Apr 13 20:19:53.214795 containerd[2119]: time="2026-04-13T20:19:53.214382282Z" level=info msg="Start cni network conf syncer for default"
Apr 13 20:19:53.214795 containerd[2119]: time="2026-04-13T20:19:53.214398883Z" level=info msg="Start streaming server"
Apr 13 20:19:53.218189 containerd[2119]: time="2026-04-13T20:19:53.216492577Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 13 20:19:53.218189 containerd[2119]: time="2026-04-13T20:19:53.216620826Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 13 20:19:53.221804 systemd[1]: Started containerd.service - containerd container runtime.
Apr 13 20:19:53.223115 containerd[2119]: time="2026-04-13T20:19:53.222890762Z" level=info msg="containerd successfully booted in 0.165806s"
Apr 13 20:19:53.240254 amazon-ssm-agent[2153]: 2026-04-13 20:19:52 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 13 20:19:53.339578 amazon-ssm-agent[2153]: 2026-04-13 20:19:52 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Apr 13 20:19:53.439836 amazon-ssm-agent[2153]: 2026-04-13 20:19:52 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Apr 13 20:19:53.445003 amazon-ssm-agent[2153]: 2026-04-13 20:19:52 INFO [amazon-ssm-agent] Starting Core Agent
Apr 13 20:19:53.445003 amazon-ssm-agent[2153]: 2026-04-13 20:19:52 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Apr 13 20:19:53.445003 amazon-ssm-agent[2153]: 2026-04-13 20:19:52 INFO [Registrar] Starting registrar module
Apr 13 20:19:53.446217 amazon-ssm-agent[2153]: 2026-04-13 20:19:52 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Apr 13 20:19:53.446217 amazon-ssm-agent[2153]: 2026-04-13 20:19:53 INFO [EC2Identity] EC2 registration was successful.
Apr 13 20:19:53.446217 amazon-ssm-agent[2153]: 2026-04-13 20:19:53 INFO [CredentialRefresher] credentialRefresher has started
Apr 13 20:19:53.446217 amazon-ssm-agent[2153]: 2026-04-13 20:19:53 INFO [CredentialRefresher] Starting credentials refresher loop
Apr 13 20:19:53.446217 amazon-ssm-agent[2153]: 2026-04-13 20:19:53 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Apr 13 20:19:53.504292 tar[2108]: linux-amd64/README.md
Apr 13 20:19:53.519922 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 13 20:19:53.540084 amazon-ssm-agent[2153]: 2026-04-13 20:19:53 INFO [CredentialRefresher] Next credential rotation will be in 32.35832758671667 minutes
Apr 13 20:19:54.461051 amazon-ssm-agent[2153]: 2026-04-13 20:19:54 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Apr 13 20:19:54.564313 amazon-ssm-agent[2153]: 2026-04-13 20:19:54 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2339) started
Apr 13 20:19:54.664906 amazon-ssm-agent[2153]: 2026-04-13 20:19:54 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Apr 13 20:19:54.717437 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:19:54.720661 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 13 20:19:54.721702 systemd[1]: Startup finished in 7.680s (kernel) + 8.083s (userspace) = 15.763s.
Apr 13 20:19:54.731192 (kubelet)[2357]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 20:19:54.839361 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 13 20:19:54.845528 systemd[1]: Started sshd@0-172.31.31.175:22-50.85.169.122:52798.service - OpenSSH per-connection server daemon (50.85.169.122:52798).
Apr 13 20:19:55.807526 sshd[2362]: Accepted publickey for core from 50.85.169.122 port 52798 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4
Apr 13 20:19:55.811868 sshd[2362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:19:55.824811 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 13 20:19:55.833524 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 13 20:19:55.840383 systemd-logind[2083]: New session 1 of user core.
Apr 13 20:19:55.854417 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 13 20:19:55.871102 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 13 20:19:55.873191 kubelet[2357]: E0413 20:19:55.873135 2357 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 20:19:55.886452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 20:19:55.886885 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 20:19:55.890908 (systemd)[2373]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 13 20:19:56.008103 systemd[2373]: Queued start job for default target default.target.
Apr 13 20:19:56.008642 systemd[2373]: Created slice app.slice - User Application Slice.
Apr 13 20:19:56.008674 systemd[2373]: Reached target paths.target - Paths.
Apr 13 20:19:56.008692 systemd[2373]: Reached target timers.target - Timers.
Apr 13 20:19:56.013345 systemd[2373]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 13 20:19:56.023909 systemd[2373]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 13 20:19:56.024156 systemd[2373]: Reached target sockets.target - Sockets.
Apr 13 20:19:56.024288 systemd[2373]: Reached target basic.target - Basic System.
Apr 13 20:19:56.024416 systemd[2373]: Reached target default.target - Main User Target.
Apr 13 20:19:56.024540 systemd[2373]: Startup finished in 126ms.
Apr 13 20:19:56.024698 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 13 20:19:56.037413 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 13 20:19:56.731559 systemd[1]: Started sshd@1-172.31.31.175:22-50.85.169.122:52806.service - OpenSSH per-connection server daemon (50.85.169.122:52806).
Apr 13 20:19:57.749349 sshd[2387]: Accepted publickey for core from 50.85.169.122 port 52806 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4
Apr 13 20:19:57.751063 sshd[2387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:19:57.756097 systemd-logind[2083]: New session 2 of user core.
Apr 13 20:19:57.763619 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 13 20:19:58.460303 sshd[2387]: pam_unix(sshd:session): session closed for user core
Apr 13 20:19:58.466125 systemd[1]: sshd@1-172.31.31.175:22-50.85.169.122:52806.service: Deactivated successfully.
Apr 13 20:19:58.466327 systemd-logind[2083]: Session 2 logged out. Waiting for processes to exit.
Apr 13 20:19:58.470942 systemd[1]: session-2.scope: Deactivated successfully.
Apr 13 20:19:58.472300 systemd-logind[2083]: Removed session 2.
Apr 13 20:19:58.627566 systemd[1]: Started sshd@2-172.31.31.175:22-50.85.169.122:52816.service - OpenSSH per-connection server daemon (50.85.169.122:52816).
Apr 13 20:19:59.464580 systemd-resolved[1991]: Clock change detected. Flushing caches.
Apr 13 20:20:00.311058 sshd[2395]: Accepted publickey for core from 50.85.169.122 port 52816 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4
Apr 13 20:20:00.311834 sshd[2395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:20:00.319817 systemd-logind[2083]: New session 3 of user core.
Apr 13 20:20:00.327136 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 13 20:20:00.993389 sshd[2395]: pam_unix(sshd:session): session closed for user core
Apr 13 20:20:00.998085 systemd[1]: sshd@2-172.31.31.175:22-50.85.169.122:52816.service: Deactivated successfully.
Apr 13 20:20:01.002706 systemd-logind[2083]: Session 3 logged out. Waiting for processes to exit.
Apr 13 20:20:01.003563 systemd[1]: session-3.scope: Deactivated successfully.
Apr 13 20:20:01.004809 systemd-logind[2083]: Removed session 3.
Apr 13 20:20:01.172549 systemd[1]: Started sshd@3-172.31.31.175:22-50.85.169.122:36538.service - OpenSSH per-connection server daemon (50.85.169.122:36538).
Apr 13 20:20:02.250246 sshd[2403]: Accepted publickey for core from 50.85.169.122 port 36538 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4
Apr 13 20:20:02.252587 sshd[2403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:20:02.265985 systemd-logind[2083]: New session 4 of user core.
Apr 13 20:20:02.273545 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 13 20:20:02.982015 sshd[2403]: pam_unix(sshd:session): session closed for user core
Apr 13 20:20:02.995565 systemd[1]: sshd@3-172.31.31.175:22-50.85.169.122:36538.service: Deactivated successfully.
Apr 13 20:20:03.010628 systemd[1]: session-4.scope: Deactivated successfully.
Apr 13 20:20:03.013256 systemd-logind[2083]: Session 4 logged out. Waiting for processes to exit.
Apr 13 20:20:03.017500 systemd-logind[2083]: Removed session 4.
Apr 13 20:20:03.152712 systemd[1]: Started sshd@4-172.31.31.175:22-50.85.169.122:36544.service - OpenSSH per-connection server daemon (50.85.169.122:36544).
Apr 13 20:20:04.180213 sshd[2411]: Accepted publickey for core from 50.85.169.122 port 36544 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4
Apr 13 20:20:04.182742 sshd[2411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:20:04.195846 systemd-logind[2083]: New session 5 of user core.
Apr 13 20:20:04.203576 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 13 20:20:04.732111 sudo[2415]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 13 20:20:04.732552 sudo[2415]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 20:20:04.758265 sudo[2415]: pam_unix(sudo:session): session closed for user root
Apr 13 20:20:04.922624 sshd[2411]: pam_unix(sshd:session): session closed for user core
Apr 13 20:20:04.926531 systemd[1]: sshd@4-172.31.31.175:22-50.85.169.122:36544.service: Deactivated successfully.
Apr 13 20:20:04.931532 systemd-logind[2083]: Session 5 logged out. Waiting for processes to exit.
Apr 13 20:20:04.931583 systemd[1]: session-5.scope: Deactivated successfully.
Apr 13 20:20:04.944455 systemd-logind[2083]: Removed session 5.
Apr 13 20:20:05.080547 systemd[1]: Started sshd@5-172.31.31.175:22-50.85.169.122:36556.service - OpenSSH per-connection server daemon (50.85.169.122:36556).
Apr 13 20:20:06.048448 sshd[2420]: Accepted publickey for core from 50.85.169.122 port 36556 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4
Apr 13 20:20:06.058749 sshd[2420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:20:06.083221 systemd-logind[2083]: New session 6 of user core.
Apr 13 20:20:06.094459 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 13 20:20:06.554460 sudo[2425]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 13 20:20:06.554868 sudo[2425]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 20:20:06.559609 sudo[2425]: pam_unix(sudo:session): session closed for user root
Apr 13 20:20:06.565893 sudo[2424]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 13 20:20:06.566389 sudo[2424]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 20:20:06.576363 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 13 20:20:06.582808 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:20:06.587519 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 13 20:20:06.621433 auditctl[2429]: No rules
Apr 13 20:20:06.622565 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 13 20:20:06.622940 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 13 20:20:06.637712 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 13 20:20:06.685946 augenrules[2451]: No rules
Apr 13 20:20:06.688635 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 13 20:20:06.691240 sudo[2424]: pam_unix(sudo:session): session closed for user root
Apr 13 20:20:06.848381 sshd[2420]: pam_unix(sshd:session): session closed for user core
Apr 13 20:20:06.858378 systemd[1]: sshd@5-172.31.31.175:22-50.85.169.122:36556.service: Deactivated successfully.
Apr 13 20:20:06.863946 systemd[1]: session-6.scope: Deactivated successfully.
Apr 13 20:20:06.866501 systemd-logind[2083]: Session 6 logged out. Waiting for processes to exit.
Apr 13 20:20:06.868788 systemd-logind[2083]: Removed session 6.
Apr 13 20:20:07.038552 systemd[1]: Started sshd@6-172.31.31.175:22-50.85.169.122:36568.service - OpenSSH per-connection server daemon (50.85.169.122:36568).
Apr 13 20:20:07.079577 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:20:07.086163 (kubelet)[2470]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 20:20:07.198507 kubelet[2470]: E0413 20:20:07.198333 2470 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 20:20:07.205401 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 20:20:07.205759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 20:20:08.077989 sshd[2463]: Accepted publickey for core from 50.85.169.122 port 36568 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4
Apr 13 20:20:08.079932 sshd[2463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:20:08.085678 systemd-logind[2083]: New session 7 of user core.
Apr 13 20:20:08.096533 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 13 20:20:08.624003 sudo[2480]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 13 20:20:08.624442 sudo[2480]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 20:20:09.207621 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 13 20:20:09.208479 (dockerd)[2496]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 13 20:20:09.842146 dockerd[2496]: time="2026-04-13T20:20:09.842076062Z" level=info msg="Starting up"
Apr 13 20:20:10.172604 dockerd[2496]: time="2026-04-13T20:20:10.171835359Z" level=info msg="Loading containers: start."
Apr 13 20:20:10.576157 kernel: Initializing XFRM netlink socket
Apr 13 20:20:10.654735 (udev-worker)[2518]: Network interface NamePolicy= disabled on kernel command line.
Apr 13 20:20:10.749855 systemd-networkd[1658]: docker0: Link UP
Apr 13 20:20:10.770646 dockerd[2496]: time="2026-04-13T20:20:10.770598813Z" level=info msg="Loading containers: done."
Apr 13 20:20:10.810437 dockerd[2496]: time="2026-04-13T20:20:10.810379097Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 13 20:20:10.810716 dockerd[2496]: time="2026-04-13T20:20:10.810531316Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 13 20:20:10.810716 dockerd[2496]: time="2026-04-13T20:20:10.810703604Z" level=info msg="Daemon has completed initialization"
Apr 13 20:20:10.880532 dockerd[2496]: time="2026-04-13T20:20:10.880190948Z" level=info msg="API listen on /run/docker.sock"
Apr 13 20:20:10.881199 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 13 20:20:12.142101 containerd[2119]: time="2026-04-13T20:20:12.141908765Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\""
Apr 13 20:20:12.831776 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2166842166.mount: Deactivated successfully.
Apr 13 20:20:14.930323 containerd[2119]: time="2026-04-13T20:20:14.930256424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:20:14.931509 containerd[2119]: time="2026-04-13T20:20:14.931446893Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.10: active requests=0, bytes read=29989419" Apr 13 20:20:14.932843 containerd[2119]: time="2026-04-13T20:20:14.932786272Z" level=info msg="ImageCreate event name:\"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:20:14.942150 containerd[2119]: time="2026-04-13T20:20:14.940854958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:20:14.942150 containerd[2119]: time="2026-04-13T20:20:14.942074340Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.10\" with image id \"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\", size \"29986018\" in 2.799969261s" Apr 13 20:20:14.942150 containerd[2119]: time="2026-04-13T20:20:14.942142062Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\" returns image reference \"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\"" Apr 13 20:20:14.942817 containerd[2119]: time="2026-04-13T20:20:14.942778382Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\"" Apr 13 20:20:16.884347 containerd[2119]: time="2026-04-13T20:20:16.884285147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.10\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:20:16.885971 containerd[2119]: time="2026-04-13T20:20:16.885903502Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.10: active requests=0, bytes read=26021909" Apr 13 20:20:16.887444 containerd[2119]: time="2026-04-13T20:20:16.886914722Z" level=info msg="ImageCreate event name:\"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:20:16.890285 containerd[2119]: time="2026-04-13T20:20:16.890242249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:20:16.891951 containerd[2119]: time="2026-04-13T20:20:16.891899853Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.10\" with image id \"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\", size \"27552094\" in 1.949085591s" Apr 13 20:20:16.892058 containerd[2119]: time="2026-04-13T20:20:16.891952139Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\" returns image reference \"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\"" Apr 13 20:20:16.892992 containerd[2119]: time="2026-04-13T20:20:16.892960795Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\"" Apr 13 20:20:17.294473 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 13 20:20:17.301505 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:20:17.665338 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 20:20:17.680757 (kubelet)[2708]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 20:20:17.726287 kubelet[2708]: E0413 20:20:17.726204 2708 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 20:20:17.732095 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 20:20:17.733558 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 20:20:18.435194 containerd[2119]: time="2026-04-13T20:20:18.435103242Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:20:18.436618 containerd[2119]: time="2026-04-13T20:20:18.436529745Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.10: active requests=0, bytes read=20162753" Apr 13 20:20:18.438905 containerd[2119]: time="2026-04-13T20:20:18.438313026Z" level=info msg="ImageCreate event name:\"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:20:18.441769 containerd[2119]: time="2026-04-13T20:20:18.441719240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:20:18.446360 containerd[2119]: time="2026-04-13T20:20:18.446294572Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.10\" with image id \"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.10\", 
repo digest \"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\", size \"21692956\" in 1.553272649s" Apr 13 20:20:18.446360 containerd[2119]: time="2026-04-13T20:20:18.446360903Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\" returns image reference \"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\"" Apr 13 20:20:18.449558 containerd[2119]: time="2026-04-13T20:20:18.449499358Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\"" Apr 13 20:20:19.680825 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4026909263.mount: Deactivated successfully. Apr 13 20:20:20.324492 containerd[2119]: time="2026-04-13T20:20:20.324421411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:20:20.325940 containerd[2119]: time="2026-04-13T20:20:20.325696608Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.10: active requests=0, bytes read=31828763" Apr 13 20:20:20.327152 containerd[2119]: time="2026-04-13T20:20:20.326974062Z" level=info msg="ImageCreate event name:\"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:20:20.329819 containerd[2119]: time="2026-04-13T20:20:20.329775145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:20:20.330989 containerd[2119]: time="2026-04-13T20:20:20.330605171Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.10\" with image id \"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\", repo tag \"registry.k8s.io/kube-proxy:v1.33.10\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\", size \"31827782\" in 1.881049341s" Apr 13 20:20:20.330989 containerd[2119]: time="2026-04-13T20:20:20.330652464Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\" returns image reference \"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\"" Apr 13 20:20:20.331209 containerd[2119]: time="2026-04-13T20:20:20.331183562Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 13 20:20:20.835404 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1726285338.mount: Deactivated successfully. Apr 13 20:20:22.125262 containerd[2119]: time="2026-04-13T20:20:22.125052057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:20:22.127713 containerd[2119]: time="2026-04-13T20:20:22.127640614Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Apr 13 20:20:22.130150 containerd[2119]: time="2026-04-13T20:20:22.129078947Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:20:22.135450 containerd[2119]: time="2026-04-13T20:20:22.134543916Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:20:22.136619 containerd[2119]: time="2026-04-13T20:20:22.136533955Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.805306979s" Apr 13 20:20:22.136619 containerd[2119]: time="2026-04-13T20:20:22.136589905Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 13 20:20:22.138189 containerd[2119]: time="2026-04-13T20:20:22.138026396Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 13 20:20:22.635729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount282921733.mount: Deactivated successfully. Apr 13 20:20:22.641201 containerd[2119]: time="2026-04-13T20:20:22.641145082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:20:22.642252 containerd[2119]: time="2026-04-13T20:20:22.642173543Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Apr 13 20:20:22.643392 containerd[2119]: time="2026-04-13T20:20:22.643338967Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:20:22.645942 containerd[2119]: time="2026-04-13T20:20:22.645906335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:20:22.647325 containerd[2119]: time="2026-04-13T20:20:22.646738001Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 508.679765ms" Apr 13 
20:20:22.647325 containerd[2119]: time="2026-04-13T20:20:22.646776792Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 13 20:20:22.647926 containerd[2119]: time="2026-04-13T20:20:22.647899213Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 13 20:20:23.193448 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1597436567.mount: Deactivated successfully. Apr 13 20:20:23.755622 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Apr 13 20:20:24.583025 containerd[2119]: time="2026-04-13T20:20:24.582965862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:20:24.585146 containerd[2119]: time="2026-04-13T20:20:24.584813249Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718840" Apr 13 20:20:24.588008 containerd[2119]: time="2026-04-13T20:20:24.587225358Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:20:24.597825 containerd[2119]: time="2026-04-13T20:20:24.597773659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:20:24.599240 containerd[2119]: time="2026-04-13T20:20:24.599194317Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.951255653s" Apr 13 20:20:24.599414 containerd[2119]: 
time="2026-04-13T20:20:24.599389897Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Apr 13 20:20:27.794463 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 13 20:20:27.806246 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:20:28.109769 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 13 20:20:28.110057 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 13 20:20:28.110949 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:20:28.121430 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:20:28.165791 systemd[1]: Reloading requested from client PID 2888 ('systemctl') (unit session-7.scope)... Apr 13 20:20:28.165959 systemd[1]: Reloading... Apr 13 20:20:28.312657 zram_generator::config[2928]: No configuration found. Apr 13 20:20:28.475432 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 20:20:28.559611 systemd[1]: Reloading finished in 392 ms. Apr 13 20:20:28.617228 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 13 20:20:28.617531 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 13 20:20:28.618301 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:20:28.625083 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:20:29.042373 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 20:20:29.056848 (kubelet)[3003]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 20:20:29.125146 kubelet[3003]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 20:20:29.125146 kubelet[3003]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 13 20:20:29.125146 kubelet[3003]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 20:20:29.125146 kubelet[3003]: I0413 20:20:29.123253 3003 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 13 20:20:29.411095 kubelet[3003]: I0413 20:20:29.411047 3003 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 13 20:20:29.412161 kubelet[3003]: I0413 20:20:29.411290 3003 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 20:20:29.412161 kubelet[3003]: I0413 20:20:29.411601 3003 server.go:956] "Client rotation is on, will bootstrap in background" Apr 13 20:20:29.458767 kubelet[3003]: I0413 20:20:29.458086 3003 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 20:20:29.459239 kubelet[3003]: E0413 20:20:29.459208 3003 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.31.175:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 
172.31.31.175:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 20:20:29.476267 kubelet[3003]: E0413 20:20:29.476205 3003 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 20:20:29.476267 kubelet[3003]: I0413 20:20:29.476263 3003 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 13 20:20:29.482387 kubelet[3003]: I0413 20:20:29.482346 3003 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 13 20:20:29.488194 kubelet[3003]: I0413 20:20:29.488096 3003 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 20:20:29.492435 kubelet[3003]: I0413 20:20:29.488191 3003 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ip-172-31-31-175","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 13 20:20:29.492698 kubelet[3003]: I0413 20:20:29.492445 3003 topology_manager.go:138] "Creating topology manager with none policy" Apr 13 20:20:29.492698 kubelet[3003]: I0413 20:20:29.492469 3003 container_manager_linux.go:303] "Creating device plugin manager" Apr 13 20:20:29.492698 kubelet[3003]: I0413 20:20:29.492683 3003 state_mem.go:36] "Initialized new in-memory state store" Apr 13 20:20:29.500329 kubelet[3003]: I0413 20:20:29.500100 3003 kubelet.go:480] "Attempting to sync node 
with API server" Apr 13 20:20:29.500329 kubelet[3003]: I0413 20:20:29.500195 3003 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 13 20:20:29.500329 kubelet[3003]: I0413 20:20:29.500238 3003 kubelet.go:386] "Adding apiserver pod source" Apr 13 20:20:29.503250 kubelet[3003]: I0413 20:20:29.502846 3003 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 13 20:20:29.506670 kubelet[3003]: E0413 20:20:29.506612 3003 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.31.175:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-175&limit=500&resourceVersion=0\": dial tcp 172.31.31.175:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 20:20:29.508456 kubelet[3003]: E0413 20:20:29.508390 3003 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.31.175:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.175:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 20:20:29.508775 kubelet[3003]: I0413 20:20:29.508747 3003 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 13 20:20:29.511158 kubelet[3003]: I0413 20:20:29.509710 3003 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 13 20:20:29.511158 kubelet[3003]: W0413 20:20:29.510798 3003 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 13 20:20:29.518871 kubelet[3003]: I0413 20:20:29.518838 3003 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 13 20:20:29.519096 kubelet[3003]: I0413 20:20:29.519063 3003 server.go:1289] "Started kubelet" Apr 13 20:20:29.526742 kubelet[3003]: I0413 20:20:29.525489 3003 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 13 20:20:29.526742 kubelet[3003]: E0413 20:20:29.524179 3003 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.31.175:6443/api/v1/namespaces/default/events\": dial tcp 172.31.31.175:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-31-175.18a6041de6ebe8e0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-175,UID:ip-172-31-31-175,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-175,},FirstTimestamp:2026-04-13 20:20:29.519022304 +0000 UTC m=+0.455438182,LastTimestamp:2026-04-13 20:20:29.519022304 +0000 UTC m=+0.455438182,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-175,}" Apr 13 20:20:29.529128 kubelet[3003]: I0413 20:20:29.529055 3003 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 20:20:29.530844 kubelet[3003]: I0413 20:20:29.530810 3003 server.go:317] "Adding debug handlers to kubelet server" Apr 13 20:20:29.536257 kubelet[3003]: I0413 20:20:29.536225 3003 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 13 20:20:29.536540 kubelet[3003]: E0413 20:20:29.536513 3003 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-31-175\" not found" Apr 13 20:20:29.536844 kubelet[3003]: I0413 20:20:29.536817 3003 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 13 20:20:29.536918 
kubelet[3003]: I0413 20:20:29.536883 3003 reconciler.go:26] "Reconciler: start to sync state" Apr 13 20:20:29.538381 kubelet[3003]: I0413 20:20:29.538322 3003 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 20:20:29.538770 kubelet[3003]: I0413 20:20:29.538737 3003 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 13 20:20:29.539009 kubelet[3003]: I0413 20:20:29.538982 3003 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13 20:20:29.539728 kubelet[3003]: E0413 20:20:29.539678 3003 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.31.175:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.175:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 20:20:29.539966 kubelet[3003]: E0413 20:20:29.539779 3003 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.175:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-175?timeout=10s\": dial tcp 172.31.31.175:6443: connect: connection refused" interval="200ms" Apr 13 20:20:29.542159 kubelet[3003]: I0413 20:20:29.542092 3003 factory.go:223] Registration of the systemd container factory successfully Apr 13 20:20:29.543172 kubelet[3003]: I0413 20:20:29.542398 3003 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 20:20:29.548582 kubelet[3003]: E0413 20:20:29.547986 3003 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 13 20:20:29.548991 kubelet[3003]: I0413 20:20:29.548971 3003 factory.go:223] Registration of the containerd container factory successfully Apr 13 20:20:29.568971 kubelet[3003]: I0413 20:20:29.568923 3003 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 13 20:20:29.573289 kubelet[3003]: I0413 20:20:29.573257 3003 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 13 20:20:29.573869 kubelet[3003]: I0413 20:20:29.573520 3003 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 13 20:20:29.573869 kubelet[3003]: I0413 20:20:29.573554 3003 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 13 20:20:29.573869 kubelet[3003]: I0413 20:20:29.573564 3003 kubelet.go:2436] "Starting kubelet main sync loop" Apr 13 20:20:29.573869 kubelet[3003]: E0413 20:20:29.573628 3003 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 20:20:29.591600 kubelet[3003]: E0413 20:20:29.591253 3003 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.31.175:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.175:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 20:20:29.593044 kubelet[3003]: I0413 20:20:29.592965 3003 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 13 20:20:29.593044 kubelet[3003]: I0413 20:20:29.592984 3003 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 13 20:20:29.593044 kubelet[3003]: I0413 20:20:29.593004 3003 state_mem.go:36] "Initialized new in-memory state store" Apr 13 20:20:29.596675 kubelet[3003]: I0413 
20:20:29.596652 3003 policy_none.go:49] "None policy: Start" Apr 13 20:20:29.596675 kubelet[3003]: I0413 20:20:29.596680 3003 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 13 20:20:29.596823 kubelet[3003]: I0413 20:20:29.596695 3003 state_mem.go:35] "Initializing new in-memory state store" Apr 13 20:20:29.604159 kubelet[3003]: E0413 20:20:29.602825 3003 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 20:20:29.604159 kubelet[3003]: I0413 20:20:29.603062 3003 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 13 20:20:29.604159 kubelet[3003]: I0413 20:20:29.603076 3003 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 20:20:29.605083 kubelet[3003]: I0413 20:20:29.605061 3003 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 13 20:20:29.608405 kubelet[3003]: E0413 20:20:29.608384 3003 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 13 20:20:29.608665 kubelet[3003]: E0413 20:20:29.608654 3003 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-31-175\" not found" Apr 13 20:20:29.684976 kubelet[3003]: E0413 20:20:29.682866 3003 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-175\" not found" node="ip-172-31-31-175" Apr 13 20:20:29.692088 kubelet[3003]: E0413 20:20:29.692060 3003 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-175\" not found" node="ip-172-31-31-175" Apr 13 20:20:29.695190 kubelet[3003]: E0413 20:20:29.695094 3003 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-175\" not found" node="ip-172-31-31-175" Apr 13 20:20:29.705708 kubelet[3003]: I0413 20:20:29.705657 3003 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-175" Apr 13 20:20:29.706141 kubelet[3003]: E0413 20:20:29.706086 3003 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.175:6443/api/v1/nodes\": dial tcp 172.31.31.175:6443: connect: connection refused" node="ip-172-31-31-175" Apr 13 20:20:29.738596 kubelet[3003]: I0413 20:20:29.738551 3003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a62b26aeef80b28f6b8bb1587ae7b413-ca-certs\") pod \"kube-apiserver-ip-172-31-31-175\" (UID: \"a62b26aeef80b28f6b8bb1587ae7b413\") " pod="kube-system/kube-apiserver-ip-172-31-31-175" Apr 13 20:20:29.738596 kubelet[3003]: I0413 20:20:29.738603 3003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/a62b26aeef80b28f6b8bb1587ae7b413-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-175\" (UID: \"a62b26aeef80b28f6b8bb1587ae7b413\") " pod="kube-system/kube-apiserver-ip-172-31-31-175" Apr 13 20:20:29.739141 kubelet[3003]: I0413 20:20:29.738640 3003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a62b26aeef80b28f6b8bb1587ae7b413-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-175\" (UID: \"a62b26aeef80b28f6b8bb1587ae7b413\") " pod="kube-system/kube-apiserver-ip-172-31-31-175" Apr 13 20:20:29.739141 kubelet[3003]: I0413 20:20:29.738728 3003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/75c0dd3fffbb790a95cb2df6aebc7f02-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-175\" (UID: \"75c0dd3fffbb790a95cb2df6aebc7f02\") " pod="kube-system/kube-controller-manager-ip-172-31-31-175" Apr 13 20:20:29.739141 kubelet[3003]: I0413 20:20:29.738771 3003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/75c0dd3fffbb790a95cb2df6aebc7f02-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-175\" (UID: \"75c0dd3fffbb790a95cb2df6aebc7f02\") " pod="kube-system/kube-controller-manager-ip-172-31-31-175" Apr 13 20:20:29.739141 kubelet[3003]: I0413 20:20:29.738815 3003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/75c0dd3fffbb790a95cb2df6aebc7f02-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-175\" (UID: \"75c0dd3fffbb790a95cb2df6aebc7f02\") " pod="kube-system/kube-controller-manager-ip-172-31-31-175" Apr 13 20:20:29.739141 kubelet[3003]: I0413 20:20:29.738844 3003 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/75c0dd3fffbb790a95cb2df6aebc7f02-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-175\" (UID: \"75c0dd3fffbb790a95cb2df6aebc7f02\") " pod="kube-system/kube-controller-manager-ip-172-31-31-175" Apr 13 20:20:29.739313 kubelet[3003]: I0413 20:20:29.738875 3003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/638d361d66525be915cda4c8ea871a8b-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-175\" (UID: \"638d361d66525be915cda4c8ea871a8b\") " pod="kube-system/kube-scheduler-ip-172-31-31-175" Apr 13 20:20:29.739313 kubelet[3003]: I0413 20:20:29.738893 3003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/75c0dd3fffbb790a95cb2df6aebc7f02-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-175\" (UID: \"75c0dd3fffbb790a95cb2df6aebc7f02\") " pod="kube-system/kube-controller-manager-ip-172-31-31-175" Apr 13 20:20:29.741078 kubelet[3003]: E0413 20:20:29.741024 3003 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.175:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-175?timeout=10s\": dial tcp 172.31.31.175:6443: connect: connection refused" interval="400ms" Apr 13 20:20:29.909109 kubelet[3003]: I0413 20:20:29.909042 3003 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-175" Apr 13 20:20:29.909826 kubelet[3003]: E0413 20:20:29.909790 3003 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.175:6443/api/v1/nodes\": dial tcp 172.31.31.175:6443: connect: connection refused" node="ip-172-31-31-175" Apr 13 20:20:29.985022 containerd[2119]: 
time="2026-04-13T20:20:29.984898771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-175,Uid:a62b26aeef80b28f6b8bb1587ae7b413,Namespace:kube-system,Attempt:0,}" Apr 13 20:20:29.998348 containerd[2119]: time="2026-04-13T20:20:29.998226339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-175,Uid:75c0dd3fffbb790a95cb2df6aebc7f02,Namespace:kube-system,Attempt:0,}" Apr 13 20:20:29.998781 containerd[2119]: time="2026-04-13T20:20:29.998225934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-175,Uid:638d361d66525be915cda4c8ea871a8b,Namespace:kube-system,Attempt:0,}" Apr 13 20:20:30.142192 kubelet[3003]: E0413 20:20:30.142138 3003 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.175:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-175?timeout=10s\": dial tcp 172.31.31.175:6443: connect: connection refused" interval="800ms" Apr 13 20:20:30.311957 kubelet[3003]: I0413 20:20:30.311922 3003 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-175" Apr 13 20:20:30.312396 kubelet[3003]: E0413 20:20:30.312352 3003 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.175:6443/api/v1/nodes\": dial tcp 172.31.31.175:6443: connect: connection refused" node="ip-172-31-31-175" Apr 13 20:20:30.409201 kubelet[3003]: E0413 20:20:30.409110 3003 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.31.175:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.175:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 20:20:30.433454 kubelet[3003]: E0413 20:20:30.433240 3003 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://172.31.31.175:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-175&limit=500&resourceVersion=0\": dial tcp 172.31.31.175:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 20:20:30.511916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3822241238.mount: Deactivated successfully. Apr 13 20:20:30.519193 kubelet[3003]: E0413 20:20:30.519104 3003 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.31.175:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.175:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 20:20:30.520194 containerd[2119]: time="2026-04-13T20:20:30.520115642Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:20:30.522502 containerd[2119]: time="2026-04-13T20:20:30.522445451Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 20:20:30.523436 containerd[2119]: time="2026-04-13T20:20:30.523393531Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:20:30.524405 containerd[2119]: time="2026-04-13T20:20:30.524363261Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:20:30.526477 containerd[2119]: time="2026-04-13T20:20:30.526362935Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes 
read=312056" Apr 13 20:20:30.527442 containerd[2119]: time="2026-04-13T20:20:30.527401943Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:20:30.527939 containerd[2119]: time="2026-04-13T20:20:30.527825142Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 20:20:30.531513 containerd[2119]: time="2026-04-13T20:20:30.531410599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:20:30.532750 containerd[2119]: time="2026-04-13T20:20:30.532507136Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 547.512084ms" Apr 13 20:20:30.536530 containerd[2119]: time="2026-04-13T20:20:30.536466854Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 536.742405ms" Apr 13 20:20:30.540646 containerd[2119]: time="2026-04-13T20:20:30.540289700Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size 
\"311286\" in 541.960554ms" Apr 13 20:20:30.833504 containerd[2119]: time="2026-04-13T20:20:30.832726345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:20:30.833504 containerd[2119]: time="2026-04-13T20:20:30.832810876Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:20:30.833504 containerd[2119]: time="2026-04-13T20:20:30.832835706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:20:30.833504 containerd[2119]: time="2026-04-13T20:20:30.832974490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:20:30.838928 containerd[2119]: time="2026-04-13T20:20:30.837659082Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:20:30.838928 containerd[2119]: time="2026-04-13T20:20:30.837737714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:20:30.838928 containerd[2119]: time="2026-04-13T20:20:30.837761598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:20:30.838928 containerd[2119]: time="2026-04-13T20:20:30.838296611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:20:30.855173 containerd[2119]: time="2026-04-13T20:20:30.854206391Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:20:30.855173 containerd[2119]: time="2026-04-13T20:20:30.854288451Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:20:30.855173 containerd[2119]: time="2026-04-13T20:20:30.854330058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:20:30.855173 containerd[2119]: time="2026-04-13T20:20:30.854451300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:20:30.944322 kubelet[3003]: E0413 20:20:30.944278 3003 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.175:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-175?timeout=10s\": dial tcp 172.31.31.175:6443: connect: connection refused" interval="1.6s" Apr 13 20:20:30.980200 containerd[2119]: time="2026-04-13T20:20:30.980087790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-175,Uid:75c0dd3fffbb790a95cb2df6aebc7f02,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c7836bb76eae345199d21240f9874e5c595c3d9b59d15ccb5c16732501728fe\"" Apr 13 20:20:31.000626 containerd[2119]: time="2026-04-13T20:20:31.000034437Z" level=info msg="CreateContainer within sandbox \"1c7836bb76eae345199d21240f9874e5c595c3d9b59d15ccb5c16732501728fe\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 13 20:20:31.004142 containerd[2119]: time="2026-04-13T20:20:31.004076052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-175,Uid:a62b26aeef80b28f6b8bb1587ae7b413,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ad7b5c5235017e0477e08f005b758e64e30bb5c7b7ff0cfa259cd45c328ec3b\"" Apr 13 20:20:31.011201 containerd[2119]: 
time="2026-04-13T20:20:31.011092908Z" level=info msg="CreateContainer within sandbox \"6ad7b5c5235017e0477e08f005b758e64e30bb5c7b7ff0cfa259cd45c328ec3b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 13 20:20:31.018496 containerd[2119]: time="2026-04-13T20:20:31.018438621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-175,Uid:638d361d66525be915cda4c8ea871a8b,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca636419f93c0acf0efd71e12dd3b94b3c3a5fb2f0896c40c9c5d2bff8249959\"" Apr 13 20:20:31.023709 containerd[2119]: time="2026-04-13T20:20:31.023661012Z" level=info msg="CreateContainer within sandbox \"ca636419f93c0acf0efd71e12dd3b94b3c3a5fb2f0896c40c9c5d2bff8249959\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 13 20:20:31.030422 containerd[2119]: time="2026-04-13T20:20:31.030038995Z" level=info msg="CreateContainer within sandbox \"1c7836bb76eae345199d21240f9874e5c595c3d9b59d15ccb5c16732501728fe\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a0c796c1aabb771cf76774312acaff580c7f999d75e47845cf2c16f05cf533b4\"" Apr 13 20:20:31.031574 containerd[2119]: time="2026-04-13T20:20:31.031375204Z" level=info msg="StartContainer for \"a0c796c1aabb771cf76774312acaff580c7f999d75e47845cf2c16f05cf533b4\"" Apr 13 20:20:31.036622 containerd[2119]: time="2026-04-13T20:20:31.036565197Z" level=info msg="CreateContainer within sandbox \"6ad7b5c5235017e0477e08f005b758e64e30bb5c7b7ff0cfa259cd45c328ec3b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"eecf5bb977ea5fd6e579ce405bfb6c44e136f9dfa8418a015799c9e47b8a0c7f\"" Apr 13 20:20:31.038884 containerd[2119]: time="2026-04-13T20:20:31.038843919Z" level=info msg="StartContainer for \"eecf5bb977ea5fd6e579ce405bfb6c44e136f9dfa8418a015799c9e47b8a0c7f\"" Apr 13 20:20:31.050864 containerd[2119]: time="2026-04-13T20:20:31.050712801Z" level=info msg="CreateContainer within sandbox 
\"ca636419f93c0acf0efd71e12dd3b94b3c3a5fb2f0896c40c9c5d2bff8249959\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5c98982dc72da37ebf5b25fabb5a2b42da34795c72ccc6a23997ccdfdba6a4cd\"" Apr 13 20:20:31.052154 containerd[2119]: time="2026-04-13T20:20:31.052015509Z" level=info msg="StartContainer for \"5c98982dc72da37ebf5b25fabb5a2b42da34795c72ccc6a23997ccdfdba6a4cd\"" Apr 13 20:20:31.114905 kubelet[3003]: I0413 20:20:31.114358 3003 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-175" Apr 13 20:20:31.115029 kubelet[3003]: E0413 20:20:31.114905 3003 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.31.175:6443/api/v1/nodes\": dial tcp 172.31.31.175:6443: connect: connection refused" node="ip-172-31-31-175" Apr 13 20:20:31.124447 kubelet[3003]: E0413 20:20:31.124406 3003 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.31.175:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.175:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 20:20:31.188199 containerd[2119]: time="2026-04-13T20:20:31.187440056Z" level=info msg="StartContainer for \"eecf5bb977ea5fd6e579ce405bfb6c44e136f9dfa8418a015799c9e47b8a0c7f\" returns successfully" Apr 13 20:20:31.202691 containerd[2119]: time="2026-04-13T20:20:31.202443473Z" level=info msg="StartContainer for \"a0c796c1aabb771cf76774312acaff580c7f999d75e47845cf2c16f05cf533b4\" returns successfully" Apr 13 20:20:31.246647 containerd[2119]: time="2026-04-13T20:20:31.246319704Z" level=info msg="StartContainer for \"5c98982dc72da37ebf5b25fabb5a2b42da34795c72ccc6a23997ccdfdba6a4cd\" returns successfully" Apr 13 20:20:31.551148 kubelet[3003]: E0413 20:20:31.550524 3003 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" 
err="cannot create certificate signing request: Post \"https://172.31.31.175:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.31.175:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 20:20:31.620221 kubelet[3003]: E0413 20:20:31.618477 3003 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-175\" not found" node="ip-172-31-31-175" Apr 13 20:20:31.628143 kubelet[3003]: E0413 20:20:31.626677 3003 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-175\" not found" node="ip-172-31-31-175" Apr 13 20:20:31.629309 kubelet[3003]: E0413 20:20:31.629020 3003 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-175\" not found" node="ip-172-31-31-175" Apr 13 20:20:32.132979 kubelet[3003]: E0413 20:20:32.132930 3003 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.31.175:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.175:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 20:20:32.634216 kubelet[3003]: E0413 20:20:32.632562 3003 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-175\" not found" node="ip-172-31-31-175" Apr 13 20:20:32.635327 kubelet[3003]: E0413 20:20:32.635145 3003 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-175\" not found" node="ip-172-31-31-175" Apr 13 20:20:32.635827 kubelet[3003]: E0413 20:20:32.635689 3003 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ip-172-31-31-175\" not found" node="ip-172-31-31-175" Apr 13 20:20:32.718199 kubelet[3003]: I0413 20:20:32.717395 3003 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-175" Apr 13 20:20:34.486086 kubelet[3003]: E0413 20:20:34.486023 3003 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-31-175\" not found" node="ip-172-31-31-175" Apr 13 20:20:34.524674 kubelet[3003]: I0413 20:20:34.524632 3003 apiserver.go:52] "Watching apiserver" Apr 13 20:20:34.537499 kubelet[3003]: I0413 20:20:34.537468 3003 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 13 20:20:34.572747 kubelet[3003]: I0413 20:20:34.572678 3003 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-31-175" Apr 13 20:20:34.572899 kubelet[3003]: E0413 20:20:34.572758 3003 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-31-175\": node \"ip-172-31-31-175\" not found" Apr 13 20:20:34.638149 kubelet[3003]: I0413 20:20:34.637493 3003 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-31-175" Apr 13 20:20:34.652785 kubelet[3003]: E0413 20:20:34.652747 3003 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-31-175\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-31-175" Apr 13 20:20:34.652785 kubelet[3003]: I0413 20:20:34.652783 3003 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-31-175" Apr 13 20:20:34.657035 kubelet[3003]: E0413 20:20:34.656985 3003 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-31-175\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-31-175" Apr 13 20:20:34.657035 kubelet[3003]: I0413 
20:20:34.657022 3003 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-31-175" Apr 13 20:20:34.661363 kubelet[3003]: E0413 20:20:34.661327 3003 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-31-175\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-31-175" Apr 13 20:20:37.033290 systemd[1]: Reloading requested from client PID 3282 ('systemctl') (unit session-7.scope)... Apr 13 20:20:37.033358 systemd[1]: Reloading... Apr 13 20:20:37.141149 zram_generator::config[3318]: No configuration found. Apr 13 20:20:37.301060 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 20:20:37.397125 systemd[1]: Reloading finished in 363 ms. Apr 13 20:20:37.436243 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:20:37.453454 systemd[1]: kubelet.service: Deactivated successfully. Apr 13 20:20:37.454162 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:20:37.461926 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:20:37.653965 update_engine[2087]: I20260413 20:20:37.653213 2087 update_attempter.cc:509] Updating boot flags... Apr 13 20:20:37.709541 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 20:20:37.732107 (kubelet)[3399]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 20:20:37.787146 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (3411) Apr 13 20:20:37.887367 kubelet[3399]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 20:20:37.887367 kubelet[3399]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 13 20:20:37.887367 kubelet[3399]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 13 20:20:37.887367 kubelet[3399]: I0413 20:20:37.883262 3399 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 13 20:20:37.904711 kubelet[3399]: I0413 20:20:37.904060 3399 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 13 20:20:37.906091 kubelet[3399]: I0413 20:20:37.906060 3399 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 20:20:37.906814 kubelet[3399]: I0413 20:20:37.906792 3399 server.go:956] "Client rotation is on, will bootstrap in background" Apr 13 20:20:37.915304 kubelet[3399]: I0413 20:20:37.915258 3399 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 13 20:20:37.935727 kubelet[3399]: I0413 20:20:37.935690 3399 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 20:20:37.946980 kubelet[3399]: E0413 20:20:37.946931 3399 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 20:20:37.946980 kubelet[3399]: I0413 20:20:37.946986 3399 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 13 20:20:37.978058 kubelet[3399]: I0413 20:20:37.978016 3399 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 13 20:20:37.990148 kubelet[3399]: I0413 20:20:37.989514 3399 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 20:20:37.994714 kubelet[3399]: I0413 20:20:37.990315 3399 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-175","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 13 20:20:37.994714 kubelet[3399]: I0413 20:20:37.990595 3399 topology_manager.go:138] "Creating topology manager with none policy" Apr 13 
20:20:37.994714 kubelet[3399]: I0413 20:20:37.990613 3399 container_manager_linux.go:303] "Creating device plugin manager" Apr 13 20:20:37.994714 kubelet[3399]: I0413 20:20:37.990679 3399 state_mem.go:36] "Initialized new in-memory state store" Apr 13 20:20:37.994714 kubelet[3399]: I0413 20:20:37.990887 3399 kubelet.go:480] "Attempting to sync node with API server" Apr 13 20:20:37.995103 kubelet[3399]: I0413 20:20:37.990916 3399 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 13 20:20:37.995103 kubelet[3399]: I0413 20:20:37.990948 3399 kubelet.go:386] "Adding apiserver pod source" Apr 13 20:20:37.995103 kubelet[3399]: I0413 20:20:37.990967 3399 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 13 20:20:38.015615 kubelet[3399]: I0413 20:20:38.015577 3399 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 13 20:20:38.019932 kubelet[3399]: I0413 20:20:38.019893 3399 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 13 20:20:38.075199 kubelet[3399]: I0413 20:20:38.075156 3399 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 13 20:20:38.075367 kubelet[3399]: I0413 20:20:38.075214 3399 server.go:1289] "Started kubelet" Apr 13 20:20:38.095421 sudo[3505]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 13 20:20:38.095922 sudo[3505]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 13 20:20:38.113155 kubelet[3399]: I0413 20:20:38.111749 3399 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 20:20:38.113155 kubelet[3399]: I0413 20:20:38.112234 3399 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 13 20:20:38.113155 kubelet[3399]: I0413 20:20:38.112110 3399 server.go:255] 
"Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 13 20:20:38.113155 kubelet[3399]: I0413 20:20:38.112550 3399 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 13 20:20:38.116416 kubelet[3399]: I0413 20:20:38.114162 3399 server.go:317] "Adding debug handlers to kubelet server"
Apr 13 20:20:38.122154 kubelet[3399]: I0413 20:20:38.119556 3399 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 13 20:20:38.123143 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (3389)
Apr 13 20:20:38.125490 kubelet[3399]: I0413 20:20:38.125467 3399 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 13 20:20:38.132320 kubelet[3399]: I0413 20:20:38.125618 3399 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 13 20:20:38.140701 kubelet[3399]: I0413 20:20:38.140669 3399 reconciler.go:26] "Reconciler: start to sync state"
Apr 13 20:20:38.162954 kubelet[3399]: I0413 20:20:38.162854 3399 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 13 20:20:38.188657 kubelet[3399]: E0413 20:20:38.188629 3399 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 13 20:20:38.205346 kubelet[3399]: I0413 20:20:38.205320 3399 factory.go:223] Registration of the containerd container factory successfully
Apr 13 20:20:38.205556 kubelet[3399]: I0413 20:20:38.205546 3399 factory.go:223] Registration of the systemd container factory successfully
Apr 13 20:20:38.263824 kubelet[3399]: I0413 20:20:38.262300 3399 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 13 20:20:38.265423 kubelet[3399]: I0413 20:20:38.265381 3399 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 13 20:20:38.265882 kubelet[3399]: I0413 20:20:38.265866 3399 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 13 20:20:38.266128 kubelet[3399]: I0413 20:20:38.266104 3399 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 13 20:20:38.266665 kubelet[3399]: I0413 20:20:38.266652 3399 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 13 20:20:38.269627 kubelet[3399]: E0413 20:20:38.269600 3399 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 13 20:20:38.370426 kubelet[3399]: E0413 20:20:38.370395 3399 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 13 20:20:38.440728 kubelet[3399]: I0413 20:20:38.440649 3399 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 13 20:20:38.440867 kubelet[3399]: I0413 20:20:38.440853 3399 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 13 20:20:38.440954 kubelet[3399]: I0413 20:20:38.440945 3399 state_mem.go:36] "Initialized new in-memory state store"
Apr 13 20:20:38.441264 kubelet[3399]: I0413 20:20:38.441234 3399 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 13 20:20:38.441396 kubelet[3399]: I0413 20:20:38.441368 3399 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 13 20:20:38.441459 kubelet[3399]: I0413 20:20:38.441450 3399 policy_none.go:49] "None policy: Start"
Apr 13 20:20:38.441581 kubelet[3399]: I0413 20:20:38.441520 3399 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 13 20:20:38.443152 kubelet[3399]: I0413 20:20:38.442214 3399 state_mem.go:35] "Initializing new in-memory state store"
Apr 13 20:20:38.443152 kubelet[3399]: I0413 20:20:38.442382 3399 state_mem.go:75] "Updated machine memory state"
Apr 13 20:20:38.451217 kubelet[3399]: E0413 20:20:38.450223 3399 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 13 20:20:38.452249 kubelet[3399]: I0413 20:20:38.452229 3399 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 13 20:20:38.459212 kubelet[3399]: I0413 20:20:38.456624 3399 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 13 20:20:38.461699 kubelet[3399]: I0413 20:20:38.461675 3399 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 13 20:20:38.473178 kubelet[3399]: E0413 20:20:38.470357 3399 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 13 20:20:38.572378 kubelet[3399]: I0413 20:20:38.572220 3399 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-31-175"
Apr 13 20:20:38.573237 kubelet[3399]: I0413 20:20:38.572843 3399 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-31-175"
Apr 13 20:20:38.574483 kubelet[3399]: I0413 20:20:38.573179 3399 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-31-175"
Apr 13 20:20:38.582707 kubelet[3399]: I0413 20:20:38.582193 3399 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-31-175"
Apr 13 20:20:38.594308 kubelet[3399]: I0413 20:20:38.594278 3399 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-31-175"
Apr 13 20:20:38.594607 kubelet[3399]: I0413 20:20:38.594591 3399 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-31-175"
Apr 13 20:20:38.647078 kubelet[3399]: I0413 20:20:38.647028 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a62b26aeef80b28f6b8bb1587ae7b413-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-175\" (UID: \"a62b26aeef80b28f6b8bb1587ae7b413\") " pod="kube-system/kube-apiserver-ip-172-31-31-175"
Apr 13 20:20:38.647078 kubelet[3399]: I0413 20:20:38.647082 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/75c0dd3fffbb790a95cb2df6aebc7f02-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-175\" (UID: \"75c0dd3fffbb790a95cb2df6aebc7f02\") " pod="kube-system/kube-controller-manager-ip-172-31-31-175"
Apr 13 20:20:38.647291 kubelet[3399]: I0413 20:20:38.647104 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/75c0dd3fffbb790a95cb2df6aebc7f02-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-175\" (UID: \"75c0dd3fffbb790a95cb2df6aebc7f02\") " pod="kube-system/kube-controller-manager-ip-172-31-31-175"
Apr 13 20:20:38.647291 kubelet[3399]: I0413 20:20:38.647145 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/75c0dd3fffbb790a95cb2df6aebc7f02-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-175\" (UID: \"75c0dd3fffbb790a95cb2df6aebc7f02\") " pod="kube-system/kube-controller-manager-ip-172-31-31-175"
Apr 13 20:20:38.647291 kubelet[3399]: I0413 20:20:38.647167 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/75c0dd3fffbb790a95cb2df6aebc7f02-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-175\" (UID: \"75c0dd3fffbb790a95cb2df6aebc7f02\") " pod="kube-system/kube-controller-manager-ip-172-31-31-175"
Apr 13 20:20:38.647291 kubelet[3399]: I0413 20:20:38.647192 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a62b26aeef80b28f6b8bb1587ae7b413-ca-certs\") pod \"kube-apiserver-ip-172-31-31-175\" (UID: \"a62b26aeef80b28f6b8bb1587ae7b413\") " pod="kube-system/kube-apiserver-ip-172-31-31-175"
Apr 13 20:20:38.647291 kubelet[3399]: I0413 20:20:38.647213 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a62b26aeef80b28f6b8bb1587ae7b413-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-175\" (UID: \"a62b26aeef80b28f6b8bb1587ae7b413\") " pod="kube-system/kube-apiserver-ip-172-31-31-175"
Apr 13 20:20:38.647675 kubelet[3399]: I0413 20:20:38.647234 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/75c0dd3fffbb790a95cb2df6aebc7f02-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-175\" (UID: \"75c0dd3fffbb790a95cb2df6aebc7f02\") " pod="kube-system/kube-controller-manager-ip-172-31-31-175"
Apr 13 20:20:38.647675 kubelet[3399]: I0413 20:20:38.647256 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/638d361d66525be915cda4c8ea871a8b-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-175\" (UID: \"638d361d66525be915cda4c8ea871a8b\") " pod="kube-system/kube-scheduler-ip-172-31-31-175"
Apr 13 20:20:39.026439 kubelet[3399]: I0413 20:20:39.026366 3399 apiserver.go:52] "Watching apiserver"
Apr 13 20:20:39.033506 kubelet[3399]: I0413 20:20:39.033471 3399 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 13 20:20:39.074256 kubelet[3399]: I0413 20:20:39.074065 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-31-175" podStartSLOduration=1.074044956 podStartE2EDuration="1.074044956s" podCreationTimestamp="2026-04-13 20:20:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:20:39.071062557 +0000 UTC m=+1.319986363" watchObservedRunningTime="2026-04-13 20:20:39.074044956 +0000 UTC m=+1.322968768"
Apr 13 20:20:39.104065 kubelet[3399]: I0413 20:20:39.102669 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-31-175" podStartSLOduration=1.102645392 podStartE2EDuration="1.102645392s" podCreationTimestamp="2026-04-13 20:20:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:20:39.089984378 +0000 UTC m=+1.338908194" watchObservedRunningTime="2026-04-13 20:20:39.102645392 +0000 UTC m=+1.351569209"
Apr 13 20:20:39.117711 kubelet[3399]: I0413 20:20:39.117166 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-31-175" podStartSLOduration=1.117143512 podStartE2EDuration="1.117143512s" podCreationTimestamp="2026-04-13 20:20:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:20:39.103256306 +0000 UTC m=+1.352180122" watchObservedRunningTime="2026-04-13 20:20:39.117143512 +0000 UTC m=+1.366067325"
Apr 13 20:20:39.187668 sudo[3505]: pam_unix(sudo:session): session closed for user root
Apr 13 20:20:41.856192 sudo[2480]: pam_unix(sudo:session): session closed for user root
Apr 13 20:20:42.027067 sshd[2463]: pam_unix(sshd:session): session closed for user core
Apr 13 20:20:42.030658 systemd[1]: sshd@6-172.31.31.175:22-50.85.169.122:36568.service: Deactivated successfully.
Apr 13 20:20:42.037875 systemd-logind[2083]: Session 7 logged out. Waiting for processes to exit.
Apr 13 20:20:42.040093 systemd[1]: session-7.scope: Deactivated successfully.
Apr 13 20:20:42.044443 systemd-logind[2083]: Removed session 7.
Apr 13 20:20:43.426309 kubelet[3399]: I0413 20:20:43.425823 3399 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 13 20:20:43.426848 containerd[2119]: time="2026-04-13T20:20:43.426221500Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 13 20:20:43.428385 kubelet[3399]: I0413 20:20:43.427519 3399 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 13 20:20:44.486586 kubelet[3399]: I0413 20:20:44.486542 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-clustermesh-secrets\") pod \"cilium-p9m54\" (UID: \"c63cf53e-22be-4e04-ba63-11d1ae9dc6e5\") " pod="kube-system/cilium-p9m54"
Apr 13 20:20:44.487141 kubelet[3399]: I0413 20:20:44.486596 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9a912cd7-8d2b-4350-a54c-10b97ce63bde-kube-proxy\") pod \"kube-proxy-rn727\" (UID: \"9a912cd7-8d2b-4350-a54c-10b97ce63bde\") " pod="kube-system/kube-proxy-rn727"
Apr 13 20:20:44.487141 kubelet[3399]: I0413 20:20:44.486623 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnv8w\" (UniqueName: \"kubernetes.io/projected/9a912cd7-8d2b-4350-a54c-10b97ce63bde-kube-api-access-hnv8w\") pod \"kube-proxy-rn727\" (UID: \"9a912cd7-8d2b-4350-a54c-10b97ce63bde\") " pod="kube-system/kube-proxy-rn727"
Apr 13 20:20:44.487141 kubelet[3399]: I0413 20:20:44.486649 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-host-proc-sys-net\") pod \"cilium-p9m54\" (UID: \"c63cf53e-22be-4e04-ba63-11d1ae9dc6e5\") " pod="kube-system/cilium-p9m54"
Apr 13 20:20:44.487141 kubelet[3399]: I0413 20:20:44.486672 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-host-proc-sys-kernel\") pod \"cilium-p9m54\" (UID: \"c63cf53e-22be-4e04-ba63-11d1ae9dc6e5\") " pod="kube-system/cilium-p9m54"
Apr 13 20:20:44.487141 kubelet[3399]: I0413 20:20:44.486701 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a912cd7-8d2b-4350-a54c-10b97ce63bde-lib-modules\") pod \"kube-proxy-rn727\" (UID: \"9a912cd7-8d2b-4350-a54c-10b97ce63bde\") " pod="kube-system/kube-proxy-rn727"
Apr 13 20:20:44.487352 kubelet[3399]: I0413 20:20:44.486731 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-lib-modules\") pod \"cilium-p9m54\" (UID: \"c63cf53e-22be-4e04-ba63-11d1ae9dc6e5\") " pod="kube-system/cilium-p9m54"
Apr 13 20:20:44.487352 kubelet[3399]: I0413 20:20:44.486755 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-cilium-config-path\") pod \"cilium-p9m54\" (UID: \"c63cf53e-22be-4e04-ba63-11d1ae9dc6e5\") " pod="kube-system/cilium-p9m54"
Apr 13 20:20:44.487352 kubelet[3399]: I0413 20:20:44.486778 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-hubble-tls\") pod \"cilium-p9m54\" (UID: \"c63cf53e-22be-4e04-ba63-11d1ae9dc6e5\") " pod="kube-system/cilium-p9m54"
Apr 13 20:20:44.487352 kubelet[3399]: I0413 20:20:44.486800 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a912cd7-8d2b-4350-a54c-10b97ce63bde-xtables-lock\") pod \"kube-proxy-rn727\" (UID: \"9a912cd7-8d2b-4350-a54c-10b97ce63bde\") " pod="kube-system/kube-proxy-rn727"
Apr 13 20:20:44.487352 kubelet[3399]: I0413 20:20:44.486824 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-cilium-run\") pod \"cilium-p9m54\" (UID: \"c63cf53e-22be-4e04-ba63-11d1ae9dc6e5\") " pod="kube-system/cilium-p9m54"
Apr 13 20:20:44.487352 kubelet[3399]: I0413 20:20:44.486848 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-bpf-maps\") pod \"cilium-p9m54\" (UID: \"c63cf53e-22be-4e04-ba63-11d1ae9dc6e5\") " pod="kube-system/cilium-p9m54"
Apr 13 20:20:44.487508 kubelet[3399]: I0413 20:20:44.486872 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-cilium-cgroup\") pod \"cilium-p9m54\" (UID: \"c63cf53e-22be-4e04-ba63-11d1ae9dc6e5\") " pod="kube-system/cilium-p9m54"
Apr 13 20:20:44.487508 kubelet[3399]: I0413 20:20:44.486896 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-cni-path\") pod \"cilium-p9m54\" (UID: \"c63cf53e-22be-4e04-ba63-11d1ae9dc6e5\") " pod="kube-system/cilium-p9m54"
Apr 13 20:20:44.487508 kubelet[3399]: I0413 20:20:44.486918 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-etc-cni-netd\") pod \"cilium-p9m54\" (UID: \"c63cf53e-22be-4e04-ba63-11d1ae9dc6e5\") " pod="kube-system/cilium-p9m54"
Apr 13 20:20:44.487508 kubelet[3399]: I0413 20:20:44.486941 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-xtables-lock\") pod \"cilium-p9m54\" (UID: \"c63cf53e-22be-4e04-ba63-11d1ae9dc6e5\") " pod="kube-system/cilium-p9m54"
Apr 13 20:20:44.487508 kubelet[3399]: I0413 20:20:44.486961 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flfpp\" (UniqueName: \"kubernetes.io/projected/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-kube-api-access-flfpp\") pod \"cilium-p9m54\" (UID: \"c63cf53e-22be-4e04-ba63-11d1ae9dc6e5\") " pod="kube-system/cilium-p9m54"
Apr 13 20:20:44.487508 kubelet[3399]: I0413 20:20:44.486988 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-hostproc\") pod \"cilium-p9m54\" (UID: \"c63cf53e-22be-4e04-ba63-11d1ae9dc6e5\") " pod="kube-system/cilium-p9m54"
Apr 13 20:20:44.770858 containerd[2119]: time="2026-04-13T20:20:44.770598719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rn727,Uid:9a912cd7-8d2b-4350-a54c-10b97ce63bde,Namespace:kube-system,Attempt:0,}"
Apr 13 20:20:44.770858 containerd[2119]: time="2026-04-13T20:20:44.770598807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p9m54,Uid:c63cf53e-22be-4e04-ba63-11d1ae9dc6e5,Namespace:kube-system,Attempt:0,}"
Apr 13 20:20:44.790706 kubelet[3399]: I0413 20:20:44.790662 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b862b\" (UniqueName: \"kubernetes.io/projected/6625277c-f079-4caf-8eaf-c934dce6d71d-kube-api-access-b862b\") pod \"cilium-operator-6c4d7847fc-fw89g\" (UID: \"6625277c-f079-4caf-8eaf-c934dce6d71d\") " pod="kube-system/cilium-operator-6c4d7847fc-fw89g"
Apr 13 20:20:44.790849 kubelet[3399]: I0413 20:20:44.790714 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6625277c-f079-4caf-8eaf-c934dce6d71d-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-fw89g\" (UID: \"6625277c-f079-4caf-8eaf-c934dce6d71d\") " pod="kube-system/cilium-operator-6c4d7847fc-fw89g"
Apr 13 20:20:44.845687 containerd[2119]: time="2026-04-13T20:20:44.845328243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 20:20:44.845687 containerd[2119]: time="2026-04-13T20:20:44.845418334Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 20:20:44.845687 containerd[2119]: time="2026-04-13T20:20:44.845442349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:20:44.846744 containerd[2119]: time="2026-04-13T20:20:44.846112685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:20:44.849133 containerd[2119]: time="2026-04-13T20:20:44.848704149Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 20:20:44.849133 containerd[2119]: time="2026-04-13T20:20:44.848784615Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 20:20:44.849133 containerd[2119]: time="2026-04-13T20:20:44.848829115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:20:44.851441 containerd[2119]: time="2026-04-13T20:20:44.851200364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:20:44.936348 containerd[2119]: time="2026-04-13T20:20:44.935529454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rn727,Uid:9a912cd7-8d2b-4350-a54c-10b97ce63bde,Namespace:kube-system,Attempt:0,} returns sandbox id \"2123492c872a7beb9fa1596a102b1f252b2a74cf289a6a2837621e3a4f7a6091\""
Apr 13 20:20:44.939375 containerd[2119]: time="2026-04-13T20:20:44.939326242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p9m54,Uid:c63cf53e-22be-4e04-ba63-11d1ae9dc6e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2b3b388dacbd66b522c78aa0519c8704b036b3ab9036a03998a74d91d432f6e\""
Apr 13 20:20:44.945401 containerd[2119]: time="2026-04-13T20:20:44.944981853Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Apr 13 20:20:44.951441 containerd[2119]: time="2026-04-13T20:20:44.951392343Z" level=info msg="CreateContainer within sandbox \"2123492c872a7beb9fa1596a102b1f252b2a74cf289a6a2837621e3a4f7a6091\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 13 20:20:44.986619 containerd[2119]: time="2026-04-13T20:20:44.986562213Z" level=info msg="CreateContainer within sandbox \"2123492c872a7beb9fa1596a102b1f252b2a74cf289a6a2837621e3a4f7a6091\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"091aee51337d81c078a6fbbd2ca1956cbebee1d1aa3359895e881b3e352b520d\""
Apr 13 20:20:44.988468 containerd[2119]: time="2026-04-13T20:20:44.987324333Z" level=info msg="StartContainer for \"091aee51337d81c078a6fbbd2ca1956cbebee1d1aa3359895e881b3e352b520d\""
Apr 13 20:20:45.012229 containerd[2119]: time="2026-04-13T20:20:45.012183761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-fw89g,Uid:6625277c-f079-4caf-8eaf-c934dce6d71d,Namespace:kube-system,Attempt:0,}"
Apr 13 20:20:45.061841 containerd[2119]: time="2026-04-13T20:20:45.061300005Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 20:20:45.061841 containerd[2119]: time="2026-04-13T20:20:45.061427578Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 20:20:45.061841 containerd[2119]: time="2026-04-13T20:20:45.061460472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:20:45.061841 containerd[2119]: time="2026-04-13T20:20:45.061588662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:20:45.089894 containerd[2119]: time="2026-04-13T20:20:45.089589091Z" level=info msg="StartContainer for \"091aee51337d81c078a6fbbd2ca1956cbebee1d1aa3359895e881b3e352b520d\" returns successfully"
Apr 13 20:20:45.150632 containerd[2119]: time="2026-04-13T20:20:45.150570685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-fw89g,Uid:6625277c-f079-4caf-8eaf-c934dce6d71d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed28bebeba983a8cb81136228f3418d0eb9ab31627f7607fe3d48b49b0098305\""
Apr 13 20:20:46.927660 kubelet[3399]: I0413 20:20:46.927235 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rn727" podStartSLOduration=2.9272142309999998 podStartE2EDuration="2.927214231s" podCreationTimestamp="2026-04-13 20:20:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:20:45.346670284 +0000 UTC m=+7.595594100" watchObservedRunningTime="2026-04-13 20:20:46.927214231 +0000 UTC m=+9.176138046"
Apr 13 20:20:49.623571 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2930702148.mount: Deactivated successfully.
Apr 13 20:20:52.442114 containerd[2119]: time="2026-04-13T20:20:52.442053510Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:20:52.442822 containerd[2119]: time="2026-04-13T20:20:52.442746832Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Apr 13 20:20:52.445634 containerd[2119]: time="2026-04-13T20:20:52.445306927Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:20:52.447100 containerd[2119]: time="2026-04-13T20:20:52.447054561Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.502024464s"
Apr 13 20:20:52.447353 containerd[2119]: time="2026-04-13T20:20:52.447107146Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Apr 13 20:20:52.449055 containerd[2119]: time="2026-04-13T20:20:52.449023457Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Apr 13 20:20:52.455759 containerd[2119]: time="2026-04-13T20:20:52.455721419Z" level=info msg="CreateContainer within sandbox \"f2b3b388dacbd66b522c78aa0519c8704b036b3ab9036a03998a74d91d432f6e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 13 20:20:52.522703 containerd[2119]: time="2026-04-13T20:20:52.522651849Z" level=info msg="CreateContainer within sandbox \"f2b3b388dacbd66b522c78aa0519c8704b036b3ab9036a03998a74d91d432f6e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9494be53414945d6c7b428ca200a96e1f4f45106061546b404ed6923be69afd1\""
Apr 13 20:20:52.525071 containerd[2119]: time="2026-04-13T20:20:52.525001973Z" level=info msg="StartContainer for \"9494be53414945d6c7b428ca200a96e1f4f45106061546b404ed6923be69afd1\""
Apr 13 20:20:52.769554 containerd[2119]: time="2026-04-13T20:20:52.768433746Z" level=info msg="StartContainer for \"9494be53414945d6c7b428ca200a96e1f4f45106061546b404ed6923be69afd1\" returns successfully"
Apr 13 20:20:52.844875 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9494be53414945d6c7b428ca200a96e1f4f45106061546b404ed6923be69afd1-rootfs.mount: Deactivated successfully.
Apr 13 20:20:52.852938 containerd[2119]: time="2026-04-13T20:20:52.850314861Z" level=info msg="shim disconnected" id=9494be53414945d6c7b428ca200a96e1f4f45106061546b404ed6923be69afd1 namespace=k8s.io
Apr 13 20:20:52.853359 containerd[2119]: time="2026-04-13T20:20:52.852963552Z" level=warning msg="cleaning up after shim disconnected" id=9494be53414945d6c7b428ca200a96e1f4f45106061546b404ed6923be69afd1 namespace=k8s.io
Apr 13 20:20:52.853359 containerd[2119]: time="2026-04-13T20:20:52.852985975Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 20:20:53.369351 containerd[2119]: time="2026-04-13T20:20:53.369272089Z" level=info msg="CreateContainer within sandbox \"f2b3b388dacbd66b522c78aa0519c8704b036b3ab9036a03998a74d91d432f6e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 13 20:20:53.395791 containerd[2119]: time="2026-04-13T20:20:53.395732953Z" level=info msg="CreateContainer within sandbox \"f2b3b388dacbd66b522c78aa0519c8704b036b3ab9036a03998a74d91d432f6e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"15dba41ffd0824e209120b02dbbcfebec993ba1a67e3a55705f940a714f06be8\""
Apr 13 20:20:53.397990 containerd[2119]: time="2026-04-13T20:20:53.396702010Z" level=info msg="StartContainer for \"15dba41ffd0824e209120b02dbbcfebec993ba1a67e3a55705f940a714f06be8\""
Apr 13 20:20:53.469439 containerd[2119]: time="2026-04-13T20:20:53.469391488Z" level=info msg="StartContainer for \"15dba41ffd0824e209120b02dbbcfebec993ba1a67e3a55705f940a714f06be8\" returns successfully"
Apr 13 20:20:53.484245 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 13 20:20:53.484706 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 13 20:20:53.484794 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Apr 13 20:20:53.496008 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 20:20:53.540283 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 20:20:53.564434 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15dba41ffd0824e209120b02dbbcfebec993ba1a67e3a55705f940a714f06be8-rootfs.mount: Deactivated successfully.
Apr 13 20:20:53.575792 containerd[2119]: time="2026-04-13T20:20:53.575725068Z" level=info msg="shim disconnected" id=15dba41ffd0824e209120b02dbbcfebec993ba1a67e3a55705f940a714f06be8 namespace=k8s.io
Apr 13 20:20:53.575792 containerd[2119]: time="2026-04-13T20:20:53.575792714Z" level=warning msg="cleaning up after shim disconnected" id=15dba41ffd0824e209120b02dbbcfebec993ba1a67e3a55705f940a714f06be8 namespace=k8s.io
Apr 13 20:20:53.576080 containerd[2119]: time="2026-04-13T20:20:53.575804441Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 20:20:53.609924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2559225090.mount: Deactivated successfully.
Apr 13 20:20:54.371261 containerd[2119]: time="2026-04-13T20:20:54.371187989Z" level=info msg="CreateContainer within sandbox \"f2b3b388dacbd66b522c78aa0519c8704b036b3ab9036a03998a74d91d432f6e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 13 20:20:54.385151 containerd[2119]: time="2026-04-13T20:20:54.381660772Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:20:54.386905 containerd[2119]: time="2026-04-13T20:20:54.386809363Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Apr 13 20:20:54.388766 containerd[2119]: time="2026-04-13T20:20:54.388705619Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:20:54.392086 containerd[2119]: time="2026-04-13T20:20:54.392044223Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.94255781s"
Apr 13 20:20:54.393142 containerd[2119]: time="2026-04-13T20:20:54.393073473Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Apr 13 20:20:54.402908 containerd[2119]: time="2026-04-13T20:20:54.402874455Z" level=info msg="CreateContainer within sandbox \"ed28bebeba983a8cb81136228f3418d0eb9ab31627f7607fe3d48b49b0098305\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Apr 13 20:20:54.415371 containerd[2119]: time="2026-04-13T20:20:54.415329114Z" level=info msg="CreateContainer within sandbox \"f2b3b388dacbd66b522c78aa0519c8704b036b3ab9036a03998a74d91d432f6e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a77e0ed1cd0ae94d5d2812576c230acba733e5c4df81506125ead5574827a55e\""
Apr 13 20:20:54.417344 containerd[2119]: time="2026-04-13T20:20:54.416494307Z" level=info msg="StartContainer for \"a77e0ed1cd0ae94d5d2812576c230acba733e5c4df81506125ead5574827a55e\""
Apr 13 20:20:54.426489 containerd[2119]: time="2026-04-13T20:20:54.426444984Z" level=info msg="CreateContainer within sandbox \"ed28bebeba983a8cb81136228f3418d0eb9ab31627f7607fe3d48b49b0098305\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f2fc108aec2824929f7ad78be013e2e2c31f20837ed6687e3ab0b6b53e070486\""
Apr 13 20:20:54.427825 containerd[2119]: time="2026-04-13T20:20:54.427792149Z" level=info msg="StartContainer for \"f2fc108aec2824929f7ad78be013e2e2c31f20837ed6687e3ab0b6b53e070486\""
Apr 13 20:20:54.523445 containerd[2119]: time="2026-04-13T20:20:54.522859909Z" level=info msg="StartContainer for \"a77e0ed1cd0ae94d5d2812576c230acba733e5c4df81506125ead5574827a55e\" returns successfully"
Apr 13 20:20:54.525651 containerd[2119]: time="2026-04-13T20:20:54.524231515Z" level=info msg="StartContainer for \"f2fc108aec2824929f7ad78be013e2e2c31f20837ed6687e3ab0b6b53e070486\" returns successfully"
Apr 13 20:20:54.593401 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a77e0ed1cd0ae94d5d2812576c230acba733e5c4df81506125ead5574827a55e-rootfs.mount: Deactivated successfully.
Apr 13 20:20:55.055334 containerd[2119]: time="2026-04-13T20:20:55.055167878Z" level=info msg="shim disconnected" id=a77e0ed1cd0ae94d5d2812576c230acba733e5c4df81506125ead5574827a55e namespace=k8s.io
Apr 13 20:20:55.055334 containerd[2119]: time="2026-04-13T20:20:55.055332992Z" level=warning msg="cleaning up after shim disconnected" id=a77e0ed1cd0ae94d5d2812576c230acba733e5c4df81506125ead5574827a55e namespace=k8s.io
Apr 13 20:20:55.055638 containerd[2119]: time="2026-04-13T20:20:55.055344902Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 20:20:55.388082 containerd[2119]: time="2026-04-13T20:20:55.387363352Z" level=info msg="CreateContainer within sandbox \"f2b3b388dacbd66b522c78aa0519c8704b036b3ab9036a03998a74d91d432f6e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 13 20:20:55.442316 containerd[2119]: time="2026-04-13T20:20:55.442169356Z" level=info msg="CreateContainer within sandbox \"f2b3b388dacbd66b522c78aa0519c8704b036b3ab9036a03998a74d91d432f6e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"55919b05f67b6e652568fc2f54475be127ddd3900cc59eee4dee693d91983abb\""
Apr 13 20:20:55.444364 containerd[2119]: time="2026-04-13T20:20:55.444322730Z" level=info msg="StartContainer for \"55919b05f67b6e652568fc2f54475be127ddd3900cc59eee4dee693d91983abb\""
Apr 13 20:20:55.495897 systemd-journald[1574]: Under memory pressure, flushing caches.
Apr 13 20:20:55.487219 systemd-resolved[1991]: Under memory pressure, flushing caches.
Apr 13 20:20:55.487290 systemd-resolved[1991]: Flushed all caches.
Apr 13 20:20:55.550586 systemd[1]: run-containerd-runc-k8s.io-55919b05f67b6e652568fc2f54475be127ddd3900cc59eee4dee693d91983abb-runc.FlIAMt.mount: Deactivated successfully.
Apr 13 20:20:55.619563 kubelet[3399]: I0413 20:20:55.608928 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-fw89g" podStartSLOduration=2.365163879 podStartE2EDuration="11.608899524s" podCreationTimestamp="2026-04-13 20:20:44 +0000 UTC" firstStartedPulling="2026-04-13 20:20:45.152545197 +0000 UTC m=+7.401468991" lastFinishedPulling="2026-04-13 20:20:54.396280831 +0000 UTC m=+16.645204636" observedRunningTime="2026-04-13 20:20:55.446998295 +0000 UTC m=+17.695922110" watchObservedRunningTime="2026-04-13 20:20:55.608899524 +0000 UTC m=+17.857823339"
Apr 13 20:20:55.734668 containerd[2119]: time="2026-04-13T20:20:55.734560484Z" level=info msg="StartContainer for \"55919b05f67b6e652568fc2f54475be127ddd3900cc59eee4dee693d91983abb\" returns successfully"
Apr 13 20:20:55.813714 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55919b05f67b6e652568fc2f54475be127ddd3900cc59eee4dee693d91983abb-rootfs.mount: Deactivated successfully.
Apr 13 20:20:55.836142 containerd[2119]: time="2026-04-13T20:20:55.834497251Z" level=info msg="shim disconnected" id=55919b05f67b6e652568fc2f54475be127ddd3900cc59eee4dee693d91983abb namespace=k8s.io
Apr 13 20:20:55.836142 containerd[2119]: time="2026-04-13T20:20:55.834558619Z" level=warning msg="cleaning up after shim disconnected" id=55919b05f67b6e652568fc2f54475be127ddd3900cc59eee4dee693d91983abb namespace=k8s.io
Apr 13 20:20:55.836142 containerd[2119]: time="2026-04-13T20:20:55.834571081Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 20:20:55.888404 containerd[2119]: time="2026-04-13T20:20:55.888320337Z" level=warning msg="cleanup warnings time=\"2026-04-13T20:20:55Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 13 20:20:56.411935 containerd[2119]: time="2026-04-13T20:20:56.411746714Z" level=info msg="CreateContainer within sandbox \"f2b3b388dacbd66b522c78aa0519c8704b036b3ab9036a03998a74d91d432f6e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 13 20:20:56.435909 containerd[2119]: time="2026-04-13T20:20:56.435860474Z" level=info msg="CreateContainer within sandbox \"f2b3b388dacbd66b522c78aa0519c8704b036b3ab9036a03998a74d91d432f6e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e2d18289d3c0ae2de0eec4014c0f2c5e096bc18866aeedfc9c92c449c1fad1fb\""
Apr 13 20:20:56.436856 containerd[2119]: time="2026-04-13T20:20:56.436822669Z" level=info msg="StartContainer for \"e2d18289d3c0ae2de0eec4014c0f2c5e096bc18866aeedfc9c92c449c1fad1fb\""
Apr 13 20:20:56.510294 containerd[2119]: time="2026-04-13T20:20:56.510163457Z" level=info msg="StartContainer for \"e2d18289d3c0ae2de0eec4014c0f2c5e096bc18866aeedfc9c92c449c1fad1fb\" returns successfully"
Apr 13 20:20:56.640226 systemd[1]: run-containerd-runc-k8s.io-e2d18289d3c0ae2de0eec4014c0f2c5e096bc18866aeedfc9c92c449c1fad1fb-runc.y4EskC.mount: Deactivated successfully.
Apr 13 20:20:56.812821 kubelet[3399]: I0413 20:20:56.812456 3399 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Apr 13 20:20:56.901598 kubelet[3399]: I0413 20:20:56.901556 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfkpk\" (UniqueName: \"kubernetes.io/projected/de37708b-74e1-4cdc-ad7f-c82144456ee8-kube-api-access-vfkpk\") pod \"coredns-674b8bbfcf-59lcq\" (UID: \"de37708b-74e1-4cdc-ad7f-c82144456ee8\") " pod="kube-system/coredns-674b8bbfcf-59lcq"
Apr 13 20:20:56.901791 kubelet[3399]: I0413 20:20:56.901742 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de37708b-74e1-4cdc-ad7f-c82144456ee8-config-volume\") pod \"coredns-674b8bbfcf-59lcq\" (UID: \"de37708b-74e1-4cdc-ad7f-c82144456ee8\") " pod="kube-system/coredns-674b8bbfcf-59lcq"
Apr 13 20:20:56.901791 kubelet[3399]: I0413 20:20:56.901788 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qfvw\" (UniqueName: \"kubernetes.io/projected/243bd239-b852-48ac-8a42-485290c6247b-kube-api-access-8qfvw\") pod \"coredns-674b8bbfcf-l7f9d\" (UID: \"243bd239-b852-48ac-8a42-485290c6247b\") " pod="kube-system/coredns-674b8bbfcf-l7f9d"
Apr 13 20:20:56.901898 kubelet[3399]: I0413 20:20:56.901816 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/243bd239-b852-48ac-8a42-485290c6247b-config-volume\") pod \"coredns-674b8bbfcf-l7f9d\" (UID: \"243bd239-b852-48ac-8a42-485290c6247b\") " pod="kube-system/coredns-674b8bbfcf-l7f9d"
Apr 13 20:20:57.173683 containerd[2119]: time="2026-04-13T20:20:57.173557253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-59lcq,Uid:de37708b-74e1-4cdc-ad7f-c82144456ee8,Namespace:kube-system,Attempt:0,}"
Apr 13 20:20:57.176443 containerd[2119]: time="2026-04-13T20:20:57.176395409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-l7f9d,Uid:243bd239-b852-48ac-8a42-485290c6247b,Namespace:kube-system,Attempt:0,}"
Apr 13 20:20:57.417315 kubelet[3399]: I0413 20:20:57.417103 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-p9m54" podStartSLOduration=5.912091488 podStartE2EDuration="13.41708236s" podCreationTimestamp="2026-04-13 20:20:44 +0000 UTC" firstStartedPulling="2026-04-13 20:20:44.943656408 +0000 UTC m=+7.192580222" lastFinishedPulling="2026-04-13 20:20:52.448647285 +0000 UTC m=+14.697571094" observedRunningTime="2026-04-13 20:20:57.416605368 +0000 UTC m=+19.665529184" watchObservedRunningTime="2026-04-13 20:20:57.41708236 +0000 UTC m=+19.666006176"
Apr 13 20:20:59.845726 (udev-worker)[4375]: Network interface NamePolicy= disabled on kernel command line.
Apr 13 20:20:59.846331 systemd-networkd[1658]: cilium_host: Link UP
Apr 13 20:20:59.846491 systemd-networkd[1658]: cilium_net: Link UP
Apr 13 20:20:59.846698 systemd-networkd[1658]: cilium_net: Gained carrier
Apr 13 20:20:59.846905 systemd-networkd[1658]: cilium_host: Gained carrier
Apr 13 20:20:59.852216 (udev-worker)[4408]: Network interface NamePolicy= disabled on kernel command line.
Apr 13 20:21:00.174952 (udev-worker)[4420]: Network interface NamePolicy= disabled on kernel command line.
Apr 13 20:21:00.192049 systemd-networkd[1658]: cilium_vxlan: Link UP
Apr 13 20:21:00.192061 systemd-networkd[1658]: cilium_vxlan: Gained carrier
Apr 13 20:21:00.340396 systemd-networkd[1658]: cilium_net: Gained IPv6LL
Apr 13 20:21:00.796327 systemd-networkd[1658]: cilium_host: Gained IPv6LL
Apr 13 20:21:01.605519 kernel: NET: Registered PF_ALG protocol family
Apr 13 20:21:01.696858 systemd-networkd[1658]: cilium_vxlan: Gained IPv6LL
Apr 13 20:21:04.464497 ntpd[2067]: Listen normally on 6 cilium_host 192.168.0.89:123
Apr 13 20:21:04.465279 ntpd[2067]: 13 Apr 20:21:04 ntpd[2067]: Listen normally on 6 cilium_host 192.168.0.89:123
Apr 13 20:21:04.465279 ntpd[2067]: 13 Apr 20:21:04 ntpd[2067]: Listen normally on 7 cilium_net [fe80::107e:30ff:feb0:a1a4%4]:123
Apr 13 20:21:04.465279 ntpd[2067]: 13 Apr 20:21:04 ntpd[2067]: Listen normally on 8 cilium_host [fe80::d479:cbff:fe03:f8c9%5]:123
Apr 13 20:21:04.465279 ntpd[2067]: 13 Apr 20:21:04 ntpd[2067]: Listen normally on 9 cilium_vxlan [fe80::7c93:d5ff:fe51:1a38%6]:123
Apr 13 20:21:04.464588 ntpd[2067]: Listen normally on 7 cilium_net [fe80::107e:30ff:feb0:a1a4%4]:123
Apr 13 20:21:04.464647 ntpd[2067]: Listen normally on 8 cilium_host [fe80::d479:cbff:fe03:f8c9%5]:123
Apr 13 20:21:04.464687 ntpd[2067]: Listen normally on 9 cilium_vxlan [fe80::7c93:d5ff:fe51:1a38%6]:123
Apr 13 20:21:05.545673 (udev-worker)[4670]: Network interface NamePolicy= disabled on kernel command line.
Apr 13 20:21:05.548567 systemd-networkd[1658]: lxc_health: Link UP
Apr 13 20:21:05.557665 (udev-worker)[4739]: Network interface NamePolicy= disabled on kernel command line.
Apr 13 20:21:05.557710 systemd-networkd[1658]: lxc_health: Gained carrier
Apr 13 20:21:05.835740 systemd-networkd[1658]: lxc5d91a525a6d0: Link UP
Apr 13 20:21:05.843185 kernel: eth0: renamed from tmp9e99c
Apr 13 20:21:05.857788 systemd-networkd[1658]: lxc5d91a525a6d0: Gained carrier
Apr 13 20:21:05.877726 systemd-networkd[1658]: lxc0b3feb91ec57: Link UP
Apr 13 20:21:05.888531 kernel: eth0: renamed from tmpf3459
Apr 13 20:21:05.895900 (udev-worker)[4749]: Network interface NamePolicy= disabled on kernel command line.
Apr 13 20:21:05.898667 systemd-networkd[1658]: lxc0b3feb91ec57: Gained carrier
Apr 13 20:21:06.944241 systemd-networkd[1658]: lxc5d91a525a6d0: Gained IPv6LL
Apr 13 20:21:07.132372 systemd-networkd[1658]: lxc0b3feb91ec57: Gained IPv6LL
Apr 13 20:21:07.580535 systemd-networkd[1658]: lxc_health: Gained IPv6LL
Apr 13 20:21:09.599051 kubelet[3399]: I0413 20:21:09.598244 3399 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 13 20:21:10.464519 ntpd[2067]: Listen normally on 10 lxc_health [fe80::e884:56ff:fee2:e59c%8]:123
Apr 13 20:21:10.465660 ntpd[2067]: 13 Apr 20:21:10 ntpd[2067]: Listen normally on 10 lxc_health [fe80::e884:56ff:fee2:e59c%8]:123
Apr 13 20:21:10.465660 ntpd[2067]: 13 Apr 20:21:10 ntpd[2067]: Listen normally on 11 lxc5d91a525a6d0 [fe80::7488:e8ff:fed5:a8fa%10]:123
Apr 13 20:21:10.465660 ntpd[2067]: 13 Apr 20:21:10 ntpd[2067]: Listen normally on 12 lxc0b3feb91ec57 [fe80::8ce1:6bff:febe:fa%12]:123
Apr 13 20:21:10.464733 ntpd[2067]: Listen normally on 11 lxc5d91a525a6d0 [fe80::7488:e8ff:fed5:a8fa%10]:123
Apr 13 20:21:10.464786 ntpd[2067]: Listen normally on 12 lxc0b3feb91ec57 [fe80::8ce1:6bff:febe:fa%12]:123
Apr 13 20:21:10.888494 containerd[2119]: time="2026-04-13T20:21:10.886649915Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 20:21:10.888494 containerd[2119]: time="2026-04-13T20:21:10.886730630Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 20:21:10.888494 containerd[2119]: time="2026-04-13T20:21:10.886755864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:21:10.888494 containerd[2119]: time="2026-04-13T20:21:10.887717964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:21:10.905400 containerd[2119]: time="2026-04-13T20:21:10.903515527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 20:21:10.905400 containerd[2119]: time="2026-04-13T20:21:10.903602929Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 20:21:10.905400 containerd[2119]: time="2026-04-13T20:21:10.903626707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:21:10.905400 containerd[2119]: time="2026-04-13T20:21:10.903743228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:21:11.091892 containerd[2119]: time="2026-04-13T20:21:11.091768986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-59lcq,Uid:de37708b-74e1-4cdc-ad7f-c82144456ee8,Namespace:kube-system,Attempt:0,} returns sandbox id \"f345913541db5d0a655d01adb3e4e6ee1be9d6dbe058a1068eadfbd4ecca8e11\""
Apr 13 20:21:11.108388 containerd[2119]: time="2026-04-13T20:21:11.108340467Z" level=info msg="CreateContainer within sandbox \"f345913541db5d0a655d01adb3e4e6ee1be9d6dbe058a1068eadfbd4ecca8e11\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 13 20:21:11.156319 containerd[2119]: time="2026-04-13T20:21:11.155383252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-l7f9d,Uid:243bd239-b852-48ac-8a42-485290c6247b,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e99c755379cada749f5e308ee931b5a010a3b0cd7e61fd6be048afc7e3221ea\""
Apr 13 20:21:11.189331 containerd[2119]: time="2026-04-13T20:21:11.188460335Z" level=info msg="CreateContainer within sandbox \"9e99c755379cada749f5e308ee931b5a010a3b0cd7e61fd6be048afc7e3221ea\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 13 20:21:11.193356 containerd[2119]: time="2026-04-13T20:21:11.193293613Z" level=info msg="CreateContainer within sandbox \"f345913541db5d0a655d01adb3e4e6ee1be9d6dbe058a1068eadfbd4ecca8e11\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b2971810bc5a524ae75aefcc5d24dbbe10f6063beea653661e614e1bf53a67e4\""
Apr 13 20:21:11.195643 containerd[2119]: time="2026-04-13T20:21:11.195612886Z" level=info msg="StartContainer for \"b2971810bc5a524ae75aefcc5d24dbbe10f6063beea653661e614e1bf53a67e4\""
Apr 13 20:21:11.213944 containerd[2119]: time="2026-04-13T20:21:11.213859685Z" level=info msg="CreateContainer within sandbox \"9e99c755379cada749f5e308ee931b5a010a3b0cd7e61fd6be048afc7e3221ea\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"61de5619794e3f10c8d13fa784d9a8e9433e6166ac5e87ddf3d2e8a71f21d0af\""
Apr 13 20:21:11.216240 containerd[2119]: time="2026-04-13T20:21:11.216077762Z" level=info msg="StartContainer for \"61de5619794e3f10c8d13fa784d9a8e9433e6166ac5e87ddf3d2e8a71f21d0af\""
Apr 13 20:21:11.309836 containerd[2119]: time="2026-04-13T20:21:11.308298957Z" level=info msg="StartContainer for \"b2971810bc5a524ae75aefcc5d24dbbe10f6063beea653661e614e1bf53a67e4\" returns successfully"
Apr 13 20:21:11.335096 containerd[2119]: time="2026-04-13T20:21:11.334519340Z" level=info msg="StartContainer for \"61de5619794e3f10c8d13fa784d9a8e9433e6166ac5e87ddf3d2e8a71f21d0af\" returns successfully"
Apr 13 20:21:11.471475 kubelet[3399]: I0413 20:21:11.471278 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-l7f9d" podStartSLOduration=27.4712625 podStartE2EDuration="27.4712625s" podCreationTimestamp="2026-04-13 20:20:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:21:11.469287639 +0000 UTC m=+33.718211449" watchObservedRunningTime="2026-04-13 20:21:11.4712625 +0000 UTC m=+33.720186315"
Apr 13 20:21:11.487105 kubelet[3399]: I0413 20:21:11.487029 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-59lcq" podStartSLOduration=27.487005755 podStartE2EDuration="27.487005755s" podCreationTimestamp="2026-04-13 20:20:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:21:11.486215929 +0000 UTC m=+33.735139744" watchObservedRunningTime="2026-04-13 20:21:11.487005755 +0000 UTC m=+33.735929570"
Apr 13 20:21:11.899612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1358376299.mount: Deactivated successfully.
Apr 13 20:21:24.994563 systemd[1]: Started sshd@7-172.31.31.175:22-50.85.169.122:43986.service - OpenSSH per-connection server daemon (50.85.169.122:43986).
Apr 13 20:21:25.974829 sshd[4956]: Accepted publickey for core from 50.85.169.122 port 43986 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4
Apr 13 20:21:25.976298 sshd[4956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:21:25.987666 systemd-logind[2083]: New session 8 of user core.
Apr 13 20:21:25.991703 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 13 20:21:27.451966 sshd[4956]: pam_unix(sshd:session): session closed for user core
Apr 13 20:21:27.457450 systemd[1]: sshd@7-172.31.31.175:22-50.85.169.122:43986.service: Deactivated successfully.
Apr 13 20:21:27.463823 systemd[1]: session-8.scope: Deactivated successfully.
Apr 13 20:21:27.463889 systemd-logind[2083]: Session 8 logged out. Waiting for processes to exit.
Apr 13 20:21:27.467889 systemd-logind[2083]: Removed session 8.
Apr 13 20:21:32.628080 systemd[1]: Started sshd@8-172.31.31.175:22-50.85.169.122:46946.service - OpenSSH per-connection server daemon (50.85.169.122:46946).
Apr 13 20:21:33.627288 sshd[4970]: Accepted publickey for core from 50.85.169.122 port 46946 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4
Apr 13 20:21:33.628905 sshd[4970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:21:33.634823 systemd-logind[2083]: New session 9 of user core.
Apr 13 20:21:33.641975 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 13 20:21:34.405069 sshd[4970]: pam_unix(sshd:session): session closed for user core
Apr 13 20:21:34.411859 systemd[1]: sshd@8-172.31.31.175:22-50.85.169.122:46946.service: Deactivated successfully.
Apr 13 20:21:34.412411 systemd-logind[2083]: Session 9 logged out. Waiting for processes to exit.
Apr 13 20:21:34.417466 systemd[1]: session-9.scope: Deactivated successfully.
Apr 13 20:21:34.418901 systemd-logind[2083]: Removed session 9.
Apr 13 20:21:39.559765 systemd[1]: Started sshd@9-172.31.31.175:22-50.85.169.122:46954.service - OpenSSH per-connection server daemon (50.85.169.122:46954).
Apr 13 20:21:40.519171 sshd[4987]: Accepted publickey for core from 50.85.169.122 port 46954 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4
Apr 13 20:21:40.519882 sshd[4987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:21:40.526607 systemd-logind[2083]: New session 10 of user core.
Apr 13 20:21:40.533630 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 13 20:21:41.268476 sshd[4987]: pam_unix(sshd:session): session closed for user core
Apr 13 20:21:41.275383 systemd-logind[2083]: Session 10 logged out. Waiting for processes to exit.
Apr 13 20:21:41.275766 systemd[1]: sshd@9-172.31.31.175:22-50.85.169.122:46954.service: Deactivated successfully.
Apr 13 20:21:41.281635 systemd[1]: session-10.scope: Deactivated successfully.
Apr 13 20:21:41.283011 systemd-logind[2083]: Removed session 10.
Apr 13 20:21:41.433563 systemd[1]: Started sshd@10-172.31.31.175:22-50.85.169.122:39008.service - OpenSSH per-connection server daemon (50.85.169.122:39008).
Apr 13 20:21:42.422906 sshd[5001]: Accepted publickey for core from 50.85.169.122 port 39008 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4
Apr 13 20:21:42.423639 sshd[5001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:21:42.489766 systemd-logind[2083]: New session 11 of user core.
Apr 13 20:21:42.510585 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 13 20:21:43.280448 sshd[5001]: pam_unix(sshd:session): session closed for user core
Apr 13 20:21:43.285424 systemd[1]: sshd@10-172.31.31.175:22-50.85.169.122:39008.service: Deactivated successfully.
Apr 13 20:21:43.293830 systemd-logind[2083]: Session 11 logged out. Waiting for processes to exit.
Apr 13 20:21:43.296426 systemd[1]: session-11.scope: Deactivated successfully.
Apr 13 20:21:43.300949 systemd-logind[2083]: Removed session 11.
Apr 13 20:21:43.466129 systemd[1]: Started sshd@11-172.31.31.175:22-50.85.169.122:39012.service - OpenSSH per-connection server daemon (50.85.169.122:39012).
Apr 13 20:21:44.498069 sshd[5013]: Accepted publickey for core from 50.85.169.122 port 39012 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4
Apr 13 20:21:44.498762 sshd[5013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:21:44.504573 systemd-logind[2083]: New session 12 of user core.
Apr 13 20:21:44.510581 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 13 20:21:45.289764 sshd[5013]: pam_unix(sshd:session): session closed for user core
Apr 13 20:21:45.295663 systemd[1]: sshd@11-172.31.31.175:22-50.85.169.122:39012.service: Deactivated successfully.
Apr 13 20:21:45.296664 systemd-logind[2083]: Session 12 logged out. Waiting for processes to exit.
Apr 13 20:21:45.300733 systemd[1]: session-12.scope: Deactivated successfully.
Apr 13 20:21:45.302108 systemd-logind[2083]: Removed session 12.
Apr 13 20:21:50.459988 systemd[1]: Started sshd@12-172.31.31.175:22-50.85.169.122:56256.service - OpenSSH per-connection server daemon (50.85.169.122:56256).
Apr 13 20:21:51.455480 sshd[5029]: Accepted publickey for core from 50.85.169.122 port 56256 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4
Apr 13 20:21:51.457391 sshd[5029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:21:51.462987 systemd-logind[2083]: New session 13 of user core.
Apr 13 20:21:51.469547 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 13 20:21:52.231235 sshd[5029]: pam_unix(sshd:session): session closed for user core
Apr 13 20:21:52.235604 systemd[1]: sshd@12-172.31.31.175:22-50.85.169.122:56256.service: Deactivated successfully.
Apr 13 20:21:52.241257 systemd[1]: session-13.scope: Deactivated successfully.
Apr 13 20:21:52.241493 systemd-logind[2083]: Session 13 logged out. Waiting for processes to exit.
Apr 13 20:21:52.243520 systemd-logind[2083]: Removed session 13.
Apr 13 20:21:52.408784 systemd[1]: Started sshd@13-172.31.31.175:22-50.85.169.122:56266.service - OpenSSH per-connection server daemon (50.85.169.122:56266).
Apr 13 20:21:53.428630 sshd[5043]: Accepted publickey for core from 50.85.169.122 port 56266 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4
Apr 13 20:21:53.435505 sshd[5043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:21:53.441612 systemd-logind[2083]: New session 14 of user core.
Apr 13 20:21:53.450693 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 13 20:21:55.262405 sshd[5043]: pam_unix(sshd:session): session closed for user core
Apr 13 20:21:55.269746 systemd[1]: sshd@13-172.31.31.175:22-50.85.169.122:56266.service: Deactivated successfully.
Apr 13 20:21:55.275310 systemd-logind[2083]: Session 14 logged out. Waiting for processes to exit.
Apr 13 20:21:55.275360 systemd[1]: session-14.scope: Deactivated successfully.
Apr 13 20:21:55.277977 systemd-logind[2083]: Removed session 14.
Apr 13 20:21:55.423553 systemd[1]: Started sshd@14-172.31.31.175:22-50.85.169.122:56270.service - OpenSSH per-connection server daemon (50.85.169.122:56270).
Apr 13 20:21:56.424242 sshd[5055]: Accepted publickey for core from 50.85.169.122 port 56270 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4
Apr 13 20:21:56.427291 sshd[5055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:21:56.433323 systemd-logind[2083]: New session 15 of user core.
Apr 13 20:21:56.439016 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 13 20:21:57.929734 sshd[5055]: pam_unix(sshd:session): session closed for user core
Apr 13 20:21:57.937243 systemd[1]: sshd@14-172.31.31.175:22-50.85.169.122:56270.service: Deactivated successfully.
Apr 13 20:21:57.943297 systemd-logind[2083]: Session 15 logged out. Waiting for processes to exit.
Apr 13 20:21:57.945567 systemd[1]: session-15.scope: Deactivated successfully.
Apr 13 20:21:57.947769 systemd-logind[2083]: Removed session 15.
Apr 13 20:21:58.106847 systemd[1]: Started sshd@15-172.31.31.175:22-50.85.169.122:56272.service - OpenSSH per-connection server daemon (50.85.169.122:56272).
Apr 13 20:21:59.147084 sshd[5075]: Accepted publickey for core from 50.85.169.122 port 56272 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4
Apr 13 20:21:59.150818 sshd[5075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:21:59.160239 systemd-logind[2083]: New session 16 of user core.
Apr 13 20:21:59.165843 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 13 20:22:00.105432 sshd[5075]: pam_unix(sshd:session): session closed for user core
Apr 13 20:22:00.140532 systemd[1]: sshd@15-172.31.31.175:22-50.85.169.122:56272.service: Deactivated successfully.
Apr 13 20:22:00.161067 systemd[1]: session-16.scope: Deactivated successfully.
Apr 13 20:22:00.162924 systemd-logind[2083]: Session 16 logged out. Waiting for processes to exit.
Apr 13 20:22:00.170920 systemd-logind[2083]: Removed session 16.
Apr 13 20:22:00.244106 systemd[1]: Started sshd@16-172.31.31.175:22-50.85.169.122:46376.service - OpenSSH per-connection server daemon (50.85.169.122:46376).
Apr 13 20:22:01.202466 sshd[5087]: Accepted publickey for core from 50.85.169.122 port 46376 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4
Apr 13 20:22:01.203509 sshd[5087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:22:01.218089 systemd-logind[2083]: New session 17 of user core.
Apr 13 20:22:01.225857 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 13 20:22:02.733487 sshd[5087]: pam_unix(sshd:session): session closed for user core
Apr 13 20:22:02.915057 systemd[1]: sshd@16-172.31.31.175:22-50.85.169.122:46376.service: Deactivated successfully.
Apr 13 20:22:02.974538 systemd[1]: session-17.scope: Deactivated successfully.
Apr 13 20:22:02.979297 systemd-logind[2083]: Session 17 logged out. Waiting for processes to exit.
Apr 13 20:22:02.983972 systemd-logind[2083]: Removed session 17.
Apr 13 20:22:07.859581 systemd[1]: Started sshd@17-172.31.31.175:22-50.85.169.122:46380.service - OpenSSH per-connection server daemon (50.85.169.122:46380).
Apr 13 20:22:08.846624 sshd[5105]: Accepted publickey for core from 50.85.169.122 port 46380 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4
Apr 13 20:22:08.848187 sshd[5105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:22:08.854639 systemd-logind[2083]: New session 18 of user core.
Apr 13 20:22:08.859837 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 13 20:22:09.595939 sshd[5105]: pam_unix(sshd:session): session closed for user core
Apr 13 20:22:09.600500 systemd-logind[2083]: Session 18 logged out. Waiting for processes to exit.
Apr 13 20:22:09.603766 systemd[1]: sshd@17-172.31.31.175:22-50.85.169.122:46380.service: Deactivated successfully.
Apr 13 20:22:09.608880 systemd[1]: session-18.scope: Deactivated successfully.
Apr 13 20:22:09.610420 systemd-logind[2083]: Removed session 18.
Apr 13 20:22:14.771088 systemd[1]: Started sshd@18-172.31.31.175:22-50.85.169.122:56370.service - OpenSSH per-connection server daemon (50.85.169.122:56370).
Apr 13 20:22:15.830267 sshd[5120]: Accepted publickey for core from 50.85.169.122 port 56370 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4
Apr 13 20:22:15.833783 sshd[5120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:22:15.844300 systemd-logind[2083]: New session 19 of user core.
Apr 13 20:22:15.851633 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 13 20:22:16.636776 sshd[5120]: pam_unix(sshd:session): session closed for user core
Apr 13 20:22:16.641314 systemd[1]: sshd@18-172.31.31.175:22-50.85.169.122:56370.service: Deactivated successfully.
Apr 13 20:22:16.646649 systemd-logind[2083]: Session 19 logged out. Waiting for processes to exit.
Apr 13 20:22:16.648457 systemd[1]: session-19.scope: Deactivated successfully.
Apr 13 20:22:16.649760 systemd-logind[2083]: Removed session 19.
Apr 13 20:22:16.808553 systemd[1]: Started sshd@19-172.31.31.175:22-50.85.169.122:56372.service - OpenSSH per-connection server daemon (50.85.169.122:56372).
Apr 13 20:22:17.800795 sshd[5136]: Accepted publickey for core from 50.85.169.122 port 56372 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4
Apr 13 20:22:17.802585 sshd[5136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:22:17.808393 systemd-logind[2083]: New session 20 of user core.
Apr 13 20:22:17.814841 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 13 20:22:19.787186 containerd[2119]: time="2026-04-13T20:22:19.786848410Z" level=info msg="StopContainer for \"f2fc108aec2824929f7ad78be013e2e2c31f20837ed6687e3ab0b6b53e070486\" with timeout 30 (s)"
Apr 13 20:22:19.806472 containerd[2119]: time="2026-04-13T20:22:19.806433482Z" level=info msg="Stop container \"f2fc108aec2824929f7ad78be013e2e2c31f20837ed6687e3ab0b6b53e070486\" with signal terminated"
Apr 13 20:22:19.921537 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2fc108aec2824929f7ad78be013e2e2c31f20837ed6687e3ab0b6b53e070486-rootfs.mount: Deactivated successfully.
Apr 13 20:22:19.928751 containerd[2119]: time="2026-04-13T20:22:19.928445894Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 13 20:22:19.930723 containerd[2119]: time="2026-04-13T20:22:19.929948545Z" level=info msg="StopContainer for \"e2d18289d3c0ae2de0eec4014c0f2c5e096bc18866aeedfc9c92c449c1fad1fb\" with timeout 2 (s)"
Apr 13 20:22:19.930723 containerd[2119]: time="2026-04-13T20:22:19.930301033Z" level=info msg="Stop container \"e2d18289d3c0ae2de0eec4014c0f2c5e096bc18866aeedfc9c92c449c1fad1fb\" with signal terminated"
Apr 13 20:22:19.937647 containerd[2119]: time="2026-04-13T20:22:19.937454661Z" level=info msg="shim disconnected" id=f2fc108aec2824929f7ad78be013e2e2c31f20837ed6687e3ab0b6b53e070486 namespace=k8s.io
Apr 13 20:22:19.937647 containerd[2119]: time="2026-04-13T20:22:19.937578571Z" level=warning msg="cleaning up after shim disconnected" id=f2fc108aec2824929f7ad78be013e2e2c31f20837ed6687e3ab0b6b53e070486 namespace=k8s.io
Apr 13 20:22:19.937647 containerd[2119]: time="2026-04-13T20:22:19.937592743Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 20:22:19.944234 systemd-networkd[1658]: lxc_health: Link DOWN
Apr 13 20:22:19.944243 systemd-networkd[1658]: lxc_health: Lost carrier
Apr 13 20:22:19.980946 containerd[2119]: time="2026-04-13T20:22:19.980560618Z" level=info msg="StopContainer for \"f2fc108aec2824929f7ad78be013e2e2c31f20837ed6687e3ab0b6b53e070486\" returns successfully"
Apr 13 20:22:19.986418 containerd[2119]: time="2026-04-13T20:22:19.986374249Z" level=info msg="StopPodSandbox for \"ed28bebeba983a8cb81136228f3418d0eb9ab31627f7607fe3d48b49b0098305\""
Apr 13 20:22:19.988342 containerd[2119]: time="2026-04-13T20:22:19.986436841Z" level=info msg="Container to stop \"f2fc108aec2824929f7ad78be013e2e2c31f20837ed6687e3ab0b6b53e070486\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 13 20:22:19.991547 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ed28bebeba983a8cb81136228f3418d0eb9ab31627f7607fe3d48b49b0098305-shm.mount: Deactivated successfully.
Apr 13 20:22:20.013548 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2d18289d3c0ae2de0eec4014c0f2c5e096bc18866aeedfc9c92c449c1fad1fb-rootfs.mount: Deactivated successfully.
Apr 13 20:22:20.041740 containerd[2119]: time="2026-04-13T20:22:20.038409654Z" level=info msg="shim disconnected" id=e2d18289d3c0ae2de0eec4014c0f2c5e096bc18866aeedfc9c92c449c1fad1fb namespace=k8s.io
Apr 13 20:22:20.041740 containerd[2119]: time="2026-04-13T20:22:20.041363205Z" level=warning msg="cleaning up after shim disconnected" id=e2d18289d3c0ae2de0eec4014c0f2c5e096bc18866aeedfc9c92c449c1fad1fb namespace=k8s.io
Apr 13 20:22:20.042985 containerd[2119]: time="2026-04-13T20:22:20.042693170Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 20:22:20.079980 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed28bebeba983a8cb81136228f3418d0eb9ab31627f7607fe3d48b49b0098305-rootfs.mount: Deactivated successfully.
Apr 13 20:22:20.082332 containerd[2119]: time="2026-04-13T20:22:20.082251283Z" level=info msg="shim disconnected" id=ed28bebeba983a8cb81136228f3418d0eb9ab31627f7607fe3d48b49b0098305 namespace=k8s.io
Apr 13 20:22:20.082332 containerd[2119]: time="2026-04-13T20:22:20.082329162Z" level=warning msg="cleaning up after shim disconnected" id=ed28bebeba983a8cb81136228f3418d0eb9ab31627f7607fe3d48b49b0098305 namespace=k8s.io
Apr 13 20:22:20.082332 containerd[2119]: time="2026-04-13T20:22:20.082340516Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 20:22:20.086025 containerd[2119]: time="2026-04-13T20:22:20.085676985Z" level=info msg="StopContainer for \"e2d18289d3c0ae2de0eec4014c0f2c5e096bc18866aeedfc9c92c449c1fad1fb\" returns successfully"
Apr 13 20:22:20.086629 containerd[2119]: time="2026-04-13T20:22:20.086597289Z" level=info msg="StopPodSandbox for \"f2b3b388dacbd66b522c78aa0519c8704b036b3ab9036a03998a74d91d432f6e\""
Apr 13 20:22:20.086725 containerd[2119]: time="2026-04-13T20:22:20.086650045Z" level=info msg="Container to stop \"9494be53414945d6c7b428ca200a96e1f4f45106061546b404ed6923be69afd1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 13 20:22:20.086725 containerd[2119]: time="2026-04-13T20:22:20.086666403Z" level=info msg="Container to stop \"15dba41ffd0824e209120b02dbbcfebec993ba1a67e3a55705f940a714f06be8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 13 20:22:20.086725 containerd[2119]: time="2026-04-13T20:22:20.086680073Z" level=info msg="Container to stop \"55919b05f67b6e652568fc2f54475be127ddd3900cc59eee4dee693d91983abb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 13 20:22:20.086725 containerd[2119]: time="2026-04-13T20:22:20.086694850Z" level=info msg="Container to stop \"e2d18289d3c0ae2de0eec4014c0f2c5e096bc18866aeedfc9c92c449c1fad1fb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 13 20:22:20.086725 containerd[2119]: time="2026-04-13T20:22:20.086709304Z" level=info msg="Container to stop \"a77e0ed1cd0ae94d5d2812576c230acba733e5c4df81506125ead5574827a55e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 13 20:22:20.116441 containerd[2119]: time="2026-04-13T20:22:20.114696580Z" level=info msg="TearDown network for sandbox \"ed28bebeba983a8cb81136228f3418d0eb9ab31627f7607fe3d48b49b0098305\" successfully"
Apr 13 20:22:20.116441 containerd[2119]: time="2026-04-13T20:22:20.114735424Z" level=info msg="StopPodSandbox for \"ed28bebeba983a8cb81136228f3418d0eb9ab31627f7607fe3d48b49b0098305\" returns successfully"
Apr 13 20:22:20.145853 containerd[2119]: time="2026-04-13T20:22:20.145788505Z" level=info msg="shim disconnected" id=f2b3b388dacbd66b522c78aa0519c8704b036b3ab9036a03998a74d91d432f6e namespace=k8s.io
Apr 13 20:22:20.147020 containerd[2119]: time="2026-04-13T20:22:20.146987942Z" level=warning msg="cleaning up after shim disconnected" id=f2b3b388dacbd66b522c78aa0519c8704b036b3ab9036a03998a74d91d432f6e namespace=k8s.io
Apr 13 20:22:20.147166 containerd[2119]: time="2026-04-13T20:22:20.147149124Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 20:22:20.167523 containerd[2119]: time="2026-04-13T20:22:20.166945263Z" level=info msg="TearDown network for sandbox \"f2b3b388dacbd66b522c78aa0519c8704b036b3ab9036a03998a74d91d432f6e\" successfully"
Apr 13 20:22:20.167523 containerd[2119]: time="2026-04-13T20:22:20.166986627Z" level=info msg="StopPodSandbox for \"f2b3b388dacbd66b522c78aa0519c8704b036b3ab9036a03998a74d91d432f6e\" returns successfully"
Apr 13 20:22:20.169512 kubelet[3399]: I0413 20:22:20.169470 3399 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6625277c-f079-4caf-8eaf-c934dce6d71d-cilium-config-path\") pod \"6625277c-f079-4caf-8eaf-c934dce6d71d\" (UID: \"6625277c-f079-4caf-8eaf-c934dce6d71d\") "
Apr 13 20:22:20.170175 kubelet[3399]: I0413 20:22:20.170089 3399 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b862b\" (UniqueName: \"kubernetes.io/projected/6625277c-f079-4caf-8eaf-c934dce6d71d-kube-api-access-b862b\") pod \"6625277c-f079-4caf-8eaf-c934dce6d71d\" (UID: \"6625277c-f079-4caf-8eaf-c934dce6d71d\") "
Apr 13 20:22:20.182717 kubelet[3399]: I0413 20:22:20.181676 3399 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6625277c-f079-4caf-8eaf-c934dce6d71d-kube-api-access-b862b" (OuterVolumeSpecName: "kube-api-access-b862b") pod "6625277c-f079-4caf-8eaf-c934dce6d71d" (UID: "6625277c-f079-4caf-8eaf-c934dce6d71d"). InnerVolumeSpecName "kube-api-access-b862b". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 13 20:22:20.184157 kubelet[3399]: I0413 20:22:20.181337 3399 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6625277c-f079-4caf-8eaf-c934dce6d71d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6625277c-f079-4caf-8eaf-c934dce6d71d" (UID: "6625277c-f079-4caf-8eaf-c934dce6d71d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 13 20:22:20.271983 kubelet[3399]: I0413 20:22:20.271276 3399 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-cilium-cgroup\") pod \"c63cf53e-22be-4e04-ba63-11d1ae9dc6e5\" (UID: \"c63cf53e-22be-4e04-ba63-11d1ae9dc6e5\") "
Apr 13 20:22:20.271983 kubelet[3399]: I0413 20:22:20.271325 3399 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-cni-path\") pod \"c63cf53e-22be-4e04-ba63-11d1ae9dc6e5\" (UID: \"c63cf53e-22be-4e04-ba63-11d1ae9dc6e5\") "
Apr 13 20:22:20.271983 kubelet[3399]: I0413 20:22:20.271353 3399 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-hostproc\") pod \"c63cf53e-22be-4e04-ba63-11d1ae9dc6e5\" (UID: \"c63cf53e-22be-4e04-ba63-11d1ae9dc6e5\") "
Apr 13 20:22:20.271983 kubelet[3399]: I0413 20:22:20.271377 3399 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-host-proc-sys-kernel\") pod \"c63cf53e-22be-4e04-ba63-11d1ae9dc6e5\" (UID: \"c63cf53e-22be-4e04-ba63-11d1ae9dc6e5\") "
Apr 13 20:22:20.271983 kubelet[3399]: I0413 20:22:20.271404 3399 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-hubble-tls\") pod \"c63cf53e-22be-4e04-ba63-11d1ae9dc6e5\" (UID: \"c63cf53e-22be-4e04-ba63-11d1ae9dc6e5\") "
Apr 13 20:22:20.271983 kubelet[3399]: I0413 20:22:20.271423 3399 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-bpf-maps\") pod \"c63cf53e-22be-4e04-ba63-11d1ae9dc6e5\" (UID: \"c63cf53e-22be-4e04-ba63-11d1ae9dc6e5\") "
Apr 13 20:22:20.272382 kubelet[3399]: I0413 20:22:20.271443 3399 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-etc-cni-netd\") pod \"c63cf53e-22be-4e04-ba63-11d1ae9dc6e5\" (UID: \"c63cf53e-22be-4e04-ba63-11d1ae9dc6e5\") "
Apr 13 20:22:20.272382 kubelet[3399]: I0413 20:22:20.271462 3399 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-xtables-lock\") pod \"c63cf53e-22be-4e04-ba63-11d1ae9dc6e5\" (UID: \"c63cf53e-22be-4e04-ba63-11d1ae9dc6e5\") "
Apr 13 20:22:20.272382 kubelet[3399]: I0413 20:22:20.271484 3399 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-host-proc-sys-net\") pod \"c63cf53e-22be-4e04-ba63-11d1ae9dc6e5\" (UID: \"c63cf53e-22be-4e04-ba63-11d1ae9dc6e5\") "
Apr 13 20:22:20.272382 kubelet[3399]: I0413 20:22:20.271509 3399 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-clustermesh-secrets\") pod \"c63cf53e-22be-4e04-ba63-11d1ae9dc6e5\" (UID: \"c63cf53e-22be-4e04-ba63-11d1ae9dc6e5\") "
Apr 13 20:22:20.272382 kubelet[3399]: I0413 20:22:20.271529 3399 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-lib-modules\") pod \"c63cf53e-22be-4e04-ba63-11d1ae9dc6e5\" (UID: \"c63cf53e-22be-4e04-ba63-11d1ae9dc6e5\") "
Apr 13 20:22:20.272382 kubelet[3399]: I0413 20:22:20.271539 3399 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c63cf53e-22be-4e04-ba63-11d1ae9dc6e5" (UID: "c63cf53e-22be-4e04-ba63-11d1ae9dc6e5"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 13 20:22:20.272809 kubelet[3399]: I0413 20:22:20.271558 3399 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-cilium-config-path\") pod \"c63cf53e-22be-4e04-ba63-11d1ae9dc6e5\" (UID: \"c63cf53e-22be-4e04-ba63-11d1ae9dc6e5\") "
Apr 13 20:22:20.272809 kubelet[3399]: I0413 20:22:20.271583 3399 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flfpp\" (UniqueName: \"kubernetes.io/projected/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-kube-api-access-flfpp\") pod \"c63cf53e-22be-4e04-ba63-11d1ae9dc6e5\" (UID: \"c63cf53e-22be-4e04-ba63-11d1ae9dc6e5\") "
Apr 13 20:22:20.272809 kubelet[3399]: I0413 20:22:20.271605 3399 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-cilium-run\") pod \"c63cf53e-22be-4e04-ba63-11d1ae9dc6e5\" (UID: \"c63cf53e-22be-4e04-ba63-11d1ae9dc6e5\") "
Apr 13 20:22:20.272809 kubelet[3399]: I0413 20:22:20.271654 3399 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6625277c-f079-4caf-8eaf-c934dce6d71d-cilium-config-path\") on node \"ip-172-31-31-175\" DevicePath \"\""
Apr 13 20:22:20.272809 kubelet[3399]: I0413 20:22:20.271668 3399 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b862b\" (UniqueName: \"kubernetes.io/projected/6625277c-f079-4caf-8eaf-c934dce6d71d-kube-api-access-b862b\") on node \"ip-172-31-31-175\" DevicePath \"\""
Apr 13 20:22:20.272809 kubelet[3399]: I0413 20:22:20.271684 3399 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-host-proc-sys-kernel\") on node \"ip-172-31-31-175\" DevicePath \"\""
Apr 13 20:22:20.282746 kubelet[3399]: I0413 20:22:20.271583 3399 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c63cf53e-22be-4e04-ba63-11d1ae9dc6e5" (UID: "c63cf53e-22be-4e04-ba63-11d1ae9dc6e5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 13 20:22:20.282746 kubelet[3399]: I0413 20:22:20.271601 3399 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-cni-path" (OuterVolumeSpecName: "cni-path") pod "c63cf53e-22be-4e04-ba63-11d1ae9dc6e5" (UID: "c63cf53e-22be-4e04-ba63-11d1ae9dc6e5"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 13 20:22:20.282746 kubelet[3399]: I0413 20:22:20.271616 3399 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-hostproc" (OuterVolumeSpecName: "hostproc") pod "c63cf53e-22be-4e04-ba63-11d1ae9dc6e5" (UID: "c63cf53e-22be-4e04-ba63-11d1ae9dc6e5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 13 20:22:20.282746 kubelet[3399]: I0413 20:22:20.271631 3399 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c63cf53e-22be-4e04-ba63-11d1ae9dc6e5" (UID: "c63cf53e-22be-4e04-ba63-11d1ae9dc6e5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 13 20:22:20.282746 kubelet[3399]: I0413 20:22:20.271708 3399 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c63cf53e-22be-4e04-ba63-11d1ae9dc6e5" (UID: "c63cf53e-22be-4e04-ba63-11d1ae9dc6e5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 13 20:22:20.283142 kubelet[3399]: I0413 20:22:20.276283 3399 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c63cf53e-22be-4e04-ba63-11d1ae9dc6e5" (UID: "c63cf53e-22be-4e04-ba63-11d1ae9dc6e5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 13 20:22:20.283142 kubelet[3399]: I0413 20:22:20.276283 3399 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c63cf53e-22be-4e04-ba63-11d1ae9dc6e5" (UID: "c63cf53e-22be-4e04-ba63-11d1ae9dc6e5"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 13 20:22:20.283142 kubelet[3399]: I0413 20:22:20.276320 3399 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c63cf53e-22be-4e04-ba63-11d1ae9dc6e5" (UID: "c63cf53e-22be-4e04-ba63-11d1ae9dc6e5"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 13 20:22:20.283142 kubelet[3399]: I0413 20:22:20.276320 3399 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c63cf53e-22be-4e04-ba63-11d1ae9dc6e5" (UID: "c63cf53e-22be-4e04-ba63-11d1ae9dc6e5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 13 20:22:20.283142 kubelet[3399]: I0413 20:22:20.276336 3399 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c63cf53e-22be-4e04-ba63-11d1ae9dc6e5" (UID: "c63cf53e-22be-4e04-ba63-11d1ae9dc6e5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 13 20:22:20.283395 kubelet[3399]: I0413 20:22:20.276351 3399 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c63cf53e-22be-4e04-ba63-11d1ae9dc6e5" (UID: "c63cf53e-22be-4e04-ba63-11d1ae9dc6e5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Apr 13 20:22:20.283395 kubelet[3399]: I0413 20:22:20.279148 3399 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c63cf53e-22be-4e04-ba63-11d1ae9dc6e5" (UID: "c63cf53e-22be-4e04-ba63-11d1ae9dc6e5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 13 20:22:20.283395 kubelet[3399]: I0413 20:22:20.282693 3399 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-kube-api-access-flfpp" (OuterVolumeSpecName: "kube-api-access-flfpp") pod "c63cf53e-22be-4e04-ba63-11d1ae9dc6e5" (UID: "c63cf53e-22be-4e04-ba63-11d1ae9dc6e5"). InnerVolumeSpecName "kube-api-access-flfpp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 13 20:22:20.372547 kubelet[3399]: I0413 20:22:20.372493 3399 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-host-proc-sys-net\") on node \"ip-172-31-31-175\" DevicePath \"\""
Apr 13 20:22:20.372547 kubelet[3399]: I0413 20:22:20.372545 3399 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-clustermesh-secrets\") on node \"ip-172-31-31-175\" DevicePath \"\""
Apr 13 20:22:20.372547 kubelet[3399]: I0413 20:22:20.372558 3399 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-lib-modules\") on node \"ip-172-31-31-175\" DevicePath \"\""
Apr 13 20:22:20.372920 kubelet[3399]: I0413 20:22:20.372573 3399 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-cilium-config-path\") on node \"ip-172-31-31-175\" DevicePath \"\""
Apr 13 20:22:20.372920 kubelet[3399]: I0413 20:22:20.372586 3399 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-flfpp\" (UniqueName: \"kubernetes.io/projected/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-kube-api-access-flfpp\") on node \"ip-172-31-31-175\" DevicePath \"\""
Apr 13 20:22:20.372920 kubelet[3399]: I0413 20:22:20.372598 3399 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-cilium-run\") on node \"ip-172-31-31-175\" DevicePath \"\""
Apr 13 20:22:20.372920 kubelet[3399]: I0413 20:22:20.372610 3399 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-cilium-cgroup\") on node \"ip-172-31-31-175\" DevicePath \"\""
Apr 13 20:22:20.372920 kubelet[3399]: I0413 20:22:20.372623 3399 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-cni-path\") on node \"ip-172-31-31-175\" DevicePath \"\""
Apr 13 20:22:20.372920 kubelet[3399]: I0413 20:22:20.372804 3399 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-hostproc\") on node \"ip-172-31-31-175\" DevicePath \"\""
Apr 13 20:22:20.372920 kubelet[3399]: I0413 20:22:20.372816 3399 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-hubble-tls\") on node \"ip-172-31-31-175\" DevicePath \"\""
Apr 13 20:22:20.372920 kubelet[3399]: I0413 20:22:20.372828 3399 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-bpf-maps\") on node \"ip-172-31-31-175\" DevicePath \"\""
Apr 13 20:22:20.373181 kubelet[3399]: I0413 20:22:20.372841 3399 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-etc-cni-netd\") on node \"ip-172-31-31-175\" DevicePath \"\""
Apr 13 20:22:20.373181 kubelet[3399]: I0413 20:22:20.372854 3399 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5-xtables-lock\") on node \"ip-172-31-31-175\" DevicePath \"\""
Apr 13 20:22:20.881446 systemd[1]: var-lib-kubelet-pods-6625277c\x2df079\x2d4caf\x2d8eaf\x2dc934dce6d71d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db862b.mount: Deactivated successfully.
Apr 13 20:22:20.882301 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2b3b388dacbd66b522c78aa0519c8704b036b3ab9036a03998a74d91d432f6e-rootfs.mount: Deactivated successfully.
Apr 13 20:22:20.882608 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f2b3b388dacbd66b522c78aa0519c8704b036b3ab9036a03998a74d91d432f6e-shm.mount: Deactivated successfully.
Apr 13 20:22:20.883198 systemd[1]: var-lib-kubelet-pods-c63cf53e\x2d22be\x2d4e04\x2dba63\x2d11d1ae9dc6e5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dflfpp.mount: Deactivated successfully.
Apr 13 20:22:20.883866 systemd[1]: var-lib-kubelet-pods-c63cf53e\x2d22be\x2d4e04\x2dba63\x2d11d1ae9dc6e5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Apr 13 20:22:20.884163 systemd[1]: var-lib-kubelet-pods-c63cf53e\x2d22be\x2d4e04\x2dba63\x2d11d1ae9dc6e5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Apr 13 20:22:20.933922 kubelet[3399]: I0413 20:22:20.933885 3399 scope.go:117] "RemoveContainer" containerID="e2d18289d3c0ae2de0eec4014c0f2c5e096bc18866aeedfc9c92c449c1fad1fb"
Apr 13 20:22:20.937371 containerd[2119]: time="2026-04-13T20:22:20.937334397Z" level=info msg="RemoveContainer for \"e2d18289d3c0ae2de0eec4014c0f2c5e096bc18866aeedfc9c92c449c1fad1fb\""
Apr 13 20:22:20.969537 containerd[2119]: time="2026-04-13T20:22:20.969332525Z" level=info msg="RemoveContainer for \"e2d18289d3c0ae2de0eec4014c0f2c5e096bc18866aeedfc9c92c449c1fad1fb\" returns successfully"
Apr 13 20:22:20.970593 kubelet[3399]: I0413 20:22:20.970550 3399 scope.go:117] "RemoveContainer" containerID="55919b05f67b6e652568fc2f54475be127ddd3900cc59eee4dee693d91983abb"
Apr 13 20:22:20.975546 containerd[2119]: time="2026-04-13T20:22:20.975506479Z" level=info msg="RemoveContainer for \"55919b05f67b6e652568fc2f54475be127ddd3900cc59eee4dee693d91983abb\""
Apr 13 20:22:20.986155 containerd[2119]: time="2026-04-13T20:22:20.983503189Z" level=info msg="RemoveContainer for \"55919b05f67b6e652568fc2f54475be127ddd3900cc59eee4dee693d91983abb\" returns successfully"
Apr 13 20:22:20.986949 kubelet[3399]: I0413 20:22:20.986915 3399 scope.go:117] "RemoveContainer" containerID="a77e0ed1cd0ae94d5d2812576c230acba733e5c4df81506125ead5574827a55e"
Apr 13 20:22:20.989425 containerd[2119]: time="2026-04-13T20:22:20.989387073Z" level=info msg="RemoveContainer for \"a77e0ed1cd0ae94d5d2812576c230acba733e5c4df81506125ead5574827a55e\""
Apr 13 20:22:20.997359 containerd[2119]: time="2026-04-13T20:22:20.997316097Z" level=info msg="RemoveContainer for \"a77e0ed1cd0ae94d5d2812576c230acba733e5c4df81506125ead5574827a55e\" returns successfully"
Apr 13 20:22:20.998195 kubelet[3399]: I0413 20:22:20.998156 3399 scope.go:117] "RemoveContainer" containerID="15dba41ffd0824e209120b02dbbcfebec993ba1a67e3a55705f940a714f06be8"
Apr 13 20:22:20.999765 containerd[2119]: time="2026-04-13T20:22:20.999730247Z" level=info msg="RemoveContainer for \"15dba41ffd0824e209120b02dbbcfebec993ba1a67e3a55705f940a714f06be8\""
Apr 13 20:22:21.008208 containerd[2119]: time="2026-04-13T20:22:21.008149625Z" level=info msg="RemoveContainer for \"15dba41ffd0824e209120b02dbbcfebec993ba1a67e3a55705f940a714f06be8\" returns successfully"
Apr 13 20:22:21.008487 kubelet[3399]: I0413 20:22:21.008446 3399 scope.go:117] "RemoveContainer" containerID="9494be53414945d6c7b428ca200a96e1f4f45106061546b404ed6923be69afd1"
Apr 13 20:22:21.011457 containerd[2119]: time="2026-04-13T20:22:21.011416696Z" level=info msg="RemoveContainer for \"9494be53414945d6c7b428ca200a96e1f4f45106061546b404ed6923be69afd1\""
Apr 13 20:22:21.023452 containerd[2119]: time="2026-04-13T20:22:21.023410710Z" level=info msg="RemoveContainer for \"9494be53414945d6c7b428ca200a96e1f4f45106061546b404ed6923be69afd1\" returns successfully"
Apr 13 20:22:21.024043 kubelet[3399]: I0413 20:22:21.023991 3399 scope.go:117] "RemoveContainer" containerID="e2d18289d3c0ae2de0eec4014c0f2c5e096bc18866aeedfc9c92c449c1fad1fb"
Apr 13 20:22:21.056610 containerd[2119]: time="2026-04-13T20:22:21.035966531Z" level=error msg="ContainerStatus for \"e2d18289d3c0ae2de0eec4014c0f2c5e096bc18866aeedfc9c92c449c1fad1fb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e2d18289d3c0ae2de0eec4014c0f2c5e096bc18866aeedfc9c92c449c1fad1fb\": not found"
Apr 13 20:22:21.061563 kubelet[3399]: E0413 20:22:21.061492 3399 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e2d18289d3c0ae2de0eec4014c0f2c5e096bc18866aeedfc9c92c449c1fad1fb\": not found" containerID="e2d18289d3c0ae2de0eec4014c0f2c5e096bc18866aeedfc9c92c449c1fad1fb"
Apr 13 20:22:21.077594 kubelet[3399]: I0413 20:22:21.061581 3399 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e2d18289d3c0ae2de0eec4014c0f2c5e096bc18866aeedfc9c92c449c1fad1fb"} err="failed to get container status \"e2d18289d3c0ae2de0eec4014c0f2c5e096bc18866aeedfc9c92c449c1fad1fb\": rpc error: code = NotFound desc = an error occurred when try to find container \"e2d18289d3c0ae2de0eec4014c0f2c5e096bc18866aeedfc9c92c449c1fad1fb\": not found"
Apr 13 20:22:21.077594 kubelet[3399]: I0413 20:22:21.077404 3399 scope.go:117] "RemoveContainer" containerID="55919b05f67b6e652568fc2f54475be127ddd3900cc59eee4dee693d91983abb"
Apr 13 20:22:21.078359 containerd[2119]: time="2026-04-13T20:22:21.078294481Z" level=error msg="ContainerStatus for \"55919b05f67b6e652568fc2f54475be127ddd3900cc59eee4dee693d91983abb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"55919b05f67b6e652568fc2f54475be127ddd3900cc59eee4dee693d91983abb\": not found"
Apr 13 20:22:21.078559 kubelet[3399]: E0413 20:22:21.078511 3399 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"55919b05f67b6e652568fc2f54475be127ddd3900cc59eee4dee693d91983abb\": not found" containerID="55919b05f67b6e652568fc2f54475be127ddd3900cc59eee4dee693d91983abb"
Apr 13 20:22:21.078640 kubelet[3399]: I0413 20:22:21.078567 3399 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"55919b05f67b6e652568fc2f54475be127ddd3900cc59eee4dee693d91983abb"} err="failed to get container status \"55919b05f67b6e652568fc2f54475be127ddd3900cc59eee4dee693d91983abb\": rpc error: code = NotFound desc = an error occurred when try to find container \"55919b05f67b6e652568fc2f54475be127ddd3900cc59eee4dee693d91983abb\": not found"
Apr 13 20:22:21.078640 kubelet[3399]: I0413 20:22:21.078597 3399 scope.go:117] "RemoveContainer" containerID="a77e0ed1cd0ae94d5d2812576c230acba733e5c4df81506125ead5574827a55e"
Apr 13 20:22:21.078980 containerd[2119]: time="2026-04-13T20:22:21.078937526Z" level=error msg="ContainerStatus for \"a77e0ed1cd0ae94d5d2812576c230acba733e5c4df81506125ead5574827a55e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a77e0ed1cd0ae94d5d2812576c230acba733e5c4df81506125ead5574827a55e\": not found"
Apr 13 20:22:21.079173 kubelet[3399]: E0413 20:22:21.079110 3399 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a77e0ed1cd0ae94d5d2812576c230acba733e5c4df81506125ead5574827a55e\": not found" containerID="a77e0ed1cd0ae94d5d2812576c230acba733e5c4df81506125ead5574827a55e"
Apr 13 20:22:21.079266 kubelet[3399]: I0413 20:22:21.079163 3399 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a77e0ed1cd0ae94d5d2812576c230acba733e5c4df81506125ead5574827a55e"} err="failed to get container status \"a77e0ed1cd0ae94d5d2812576c230acba733e5c4df81506125ead5574827a55e\": rpc error: code = NotFound desc = an error occurred when try to find container \"a77e0ed1cd0ae94d5d2812576c230acba733e5c4df81506125ead5574827a55e\": not found"
Apr 13 20:22:21.079266 kubelet[3399]: I0413 20:22:21.079187 3399 scope.go:117] "RemoveContainer" containerID="15dba41ffd0824e209120b02dbbcfebec993ba1a67e3a55705f940a714f06be8"
Apr 13 20:22:21.079445 containerd[2119]: time="2026-04-13T20:22:21.079395236Z" level=error msg="ContainerStatus for \"15dba41ffd0824e209120b02dbbcfebec993ba1a67e3a55705f940a714f06be8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"15dba41ffd0824e209120b02dbbcfebec993ba1a67e3a55705f940a714f06be8\": not found"
Apr 13 20:22:21.079969 containerd[2119]: time="2026-04-13T20:22:21.079864654Z" level=error msg="ContainerStatus for \"9494be53414945d6c7b428ca200a96e1f4f45106061546b404ed6923be69afd1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9494be53414945d6c7b428ca200a96e1f4f45106061546b404ed6923be69afd1\": not found"
Apr 13 20:22:21.080043 kubelet[3399]: E0413 20:22:21.079572 3399 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"15dba41ffd0824e209120b02dbbcfebec993ba1a67e3a55705f940a714f06be8\": not found" containerID="15dba41ffd0824e209120b02dbbcfebec993ba1a67e3a55705f940a714f06be8"
Apr 13 20:22:21.080043 kubelet[3399]: I0413 20:22:21.079601 3399 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"15dba41ffd0824e209120b02dbbcfebec993ba1a67e3a55705f940a714f06be8"} err="failed to get container status \"15dba41ffd0824e209120b02dbbcfebec993ba1a67e3a55705f940a714f06be8\": rpc error: code = NotFound desc = an error occurred when try to find container \"15dba41ffd0824e209120b02dbbcfebec993ba1a67e3a55705f940a714f06be8\": not found"
Apr 13 20:22:21.080043 kubelet[3399]: I0413 20:22:21.079621 3399 scope.go:117] "RemoveContainer" containerID="9494be53414945d6c7b428ca200a96e1f4f45106061546b404ed6923be69afd1"
Apr 13 20:22:21.080293 kubelet[3399]: E0413 20:22:21.080234 3399 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9494be53414945d6c7b428ca200a96e1f4f45106061546b404ed6923be69afd1\": not found" containerID="9494be53414945d6c7b428ca200a96e1f4f45106061546b404ed6923be69afd1"
Apr 13 20:22:21.080364 kubelet[3399]: I0413 20:22:21.080293 3399 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9494be53414945d6c7b428ca200a96e1f4f45106061546b404ed6923be69afd1"} err="failed to get container status \"9494be53414945d6c7b428ca200a96e1f4f45106061546b404ed6923be69afd1\": rpc error: code = NotFound desc = an error occurred when try to find container \"9494be53414945d6c7b428ca200a96e1f4f45106061546b404ed6923be69afd1\": not found"
Apr 13 20:22:21.080364 kubelet[3399]: I0413 20:22:21.080314 3399 scope.go:117] "RemoveContainer" containerID="f2fc108aec2824929f7ad78be013e2e2c31f20837ed6687e3ab0b6b53e070486"
Apr 13 20:22:21.081969 containerd[2119]: time="2026-04-13T20:22:21.081687956Z" level=info msg="RemoveContainer for \"f2fc108aec2824929f7ad78be013e2e2c31f20837ed6687e3ab0b6b53e070486\""
Apr 13 20:22:21.087531 containerd[2119]: time="2026-04-13T20:22:21.087478464Z" level=info msg="RemoveContainer for \"f2fc108aec2824929f7ad78be013e2e2c31f20837ed6687e3ab0b6b53e070486\" returns successfully"
Apr 13 20:22:21.087794 kubelet[3399]: I0413 20:22:21.087765 3399 scope.go:117] "RemoveContainer" containerID="f2fc108aec2824929f7ad78be013e2e2c31f20837ed6687e3ab0b6b53e070486"
Apr 13 20:22:21.088267 containerd[2119]: time="2026-04-13T20:22:21.088132971Z" level=error msg="ContainerStatus for \"f2fc108aec2824929f7ad78be013e2e2c31f20837ed6687e3ab0b6b53e070486\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f2fc108aec2824929f7ad78be013e2e2c31f20837ed6687e3ab0b6b53e070486\": not found"
Apr 13 20:22:21.088534 kubelet[3399]: E0413 20:22:21.088487 3399 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f2fc108aec2824929f7ad78be013e2e2c31f20837ed6687e3ab0b6b53e070486\": not found" containerID="f2fc108aec2824929f7ad78be013e2e2c31f20837ed6687e3ab0b6b53e070486"
Apr 13 20:22:21.088850 kubelet[3399]: I0413 20:22:21.088544 3399 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f2fc108aec2824929f7ad78be013e2e2c31f20837ed6687e3ab0b6b53e070486"} err="failed to get container status \"f2fc108aec2824929f7ad78be013e2e2c31f20837ed6687e3ab0b6b53e070486\": rpc error: code = NotFound desc = an error occurred when try to find container \"f2fc108aec2824929f7ad78be013e2e2c31f20837ed6687e3ab0b6b53e070486\": not found"
Apr 13 20:22:21.824066 sshd[5136]: pam_unix(sshd:session): session closed for user core
Apr 13 20:22:21.829446
systemd-logind[2083]: Session 20 logged out. Waiting for processes to exit. Apr 13 20:22:21.832107 systemd[1]: sshd@19-172.31.31.175:22-50.85.169.122:56372.service: Deactivated successfully. Apr 13 20:22:21.836535 systemd[1]: session-20.scope: Deactivated successfully. Apr 13 20:22:21.838042 systemd-logind[2083]: Removed session 20. Apr 13 20:22:22.002300 systemd[1]: Started sshd@20-172.31.31.175:22-50.85.169.122:46906.service - OpenSSH per-connection server daemon (50.85.169.122:46906). Apr 13 20:22:22.273969 kubelet[3399]: I0413 20:22:22.273376 3399 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6625277c-f079-4caf-8eaf-c934dce6d71d" path="/var/lib/kubelet/pods/6625277c-f079-4caf-8eaf-c934dce6d71d/volumes" Apr 13 20:22:22.274580 kubelet[3399]: I0413 20:22:22.274359 3399 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c63cf53e-22be-4e04-ba63-11d1ae9dc6e5" path="/var/lib/kubelet/pods/c63cf53e-22be-4e04-ba63-11d1ae9dc6e5/volumes" Apr 13 20:22:22.464449 ntpd[2067]: Deleting interface #10 lxc_health, fe80::e884:56ff:fee2:e59c%8#123, interface stats: received=0, sent=0, dropped=0, active_time=72 secs Apr 13 20:22:22.466769 ntpd[2067]: 13 Apr 20:22:22 ntpd[2067]: Deleting interface #10 lxc_health, fe80::e884:56ff:fee2:e59c%8#123, interface stats: received=0, sent=0, dropped=0, active_time=72 secs Apr 13 20:22:23.043855 sshd[5301]: Accepted publickey for core from 50.85.169.122 port 46906 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4 Apr 13 20:22:23.045570 sshd[5301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:22:23.051886 systemd-logind[2083]: New session 21 of user core. Apr 13 20:22:23.061832 systemd[1]: Started session-21.scope - Session 21 of User core. 
Apr 13 20:22:23.526704 kubelet[3399]: E0413 20:22:23.526516 3399 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 13 20:22:24.307274 kubelet[3399]: I0413 20:22:24.306853 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2686da6b-6e24-4aeb-b6d6-57db5921d880-lib-modules\") pod \"cilium-jvhdn\" (UID: \"2686da6b-6e24-4aeb-b6d6-57db5921d880\") " pod="kube-system/cilium-jvhdn"
Apr 13 20:22:24.307274 kubelet[3399]: I0413 20:22:24.306912 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2686da6b-6e24-4aeb-b6d6-57db5921d880-cilium-config-path\") pod \"cilium-jvhdn\" (UID: \"2686da6b-6e24-4aeb-b6d6-57db5921d880\") " pod="kube-system/cilium-jvhdn"
Apr 13 20:22:24.307274 kubelet[3399]: I0413 20:22:24.306943 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2686da6b-6e24-4aeb-b6d6-57db5921d880-hubble-tls\") pod \"cilium-jvhdn\" (UID: \"2686da6b-6e24-4aeb-b6d6-57db5921d880\") " pod="kube-system/cilium-jvhdn"
Apr 13 20:22:24.307274 kubelet[3399]: I0413 20:22:24.306959 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2686da6b-6e24-4aeb-b6d6-57db5921d880-bpf-maps\") pod \"cilium-jvhdn\" (UID: \"2686da6b-6e24-4aeb-b6d6-57db5921d880\") " pod="kube-system/cilium-jvhdn"
Apr 13 20:22:24.307274 kubelet[3399]: I0413 20:22:24.306987 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2686da6b-6e24-4aeb-b6d6-57db5921d880-hostproc\") pod \"cilium-jvhdn\" (UID: \"2686da6b-6e24-4aeb-b6d6-57db5921d880\") " pod="kube-system/cilium-jvhdn"
Apr 13 20:22:24.307274 kubelet[3399]: I0413 20:22:24.307017 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2686da6b-6e24-4aeb-b6d6-57db5921d880-cilium-cgroup\") pod \"cilium-jvhdn\" (UID: \"2686da6b-6e24-4aeb-b6d6-57db5921d880\") " pod="kube-system/cilium-jvhdn"
Apr 13 20:22:24.307683 kubelet[3399]: I0413 20:22:24.307034 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2686da6b-6e24-4aeb-b6d6-57db5921d880-clustermesh-secrets\") pod \"cilium-jvhdn\" (UID: \"2686da6b-6e24-4aeb-b6d6-57db5921d880\") " pod="kube-system/cilium-jvhdn"
Apr 13 20:22:24.307683 kubelet[3399]: I0413 20:22:24.307050 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2686da6b-6e24-4aeb-b6d6-57db5921d880-cni-path\") pod \"cilium-jvhdn\" (UID: \"2686da6b-6e24-4aeb-b6d6-57db5921d880\") " pod="kube-system/cilium-jvhdn"
Apr 13 20:22:24.307683 kubelet[3399]: I0413 20:22:24.307081 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2686da6b-6e24-4aeb-b6d6-57db5921d880-host-proc-sys-net\") pod \"cilium-jvhdn\" (UID: \"2686da6b-6e24-4aeb-b6d6-57db5921d880\") " pod="kube-system/cilium-jvhdn"
Apr 13 20:22:24.307683 kubelet[3399]: I0413 20:22:24.307139 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2686da6b-6e24-4aeb-b6d6-57db5921d880-cilium-run\") pod \"cilium-jvhdn\" (UID: \"2686da6b-6e24-4aeb-b6d6-57db5921d880\") " pod="kube-system/cilium-jvhdn"
Apr 13 20:22:24.307683 kubelet[3399]: I0413 20:22:24.307172 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2686da6b-6e24-4aeb-b6d6-57db5921d880-etc-cni-netd\") pod \"cilium-jvhdn\" (UID: \"2686da6b-6e24-4aeb-b6d6-57db5921d880\") " pod="kube-system/cilium-jvhdn"
Apr 13 20:22:24.307683 kubelet[3399]: I0413 20:22:24.307200 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2686da6b-6e24-4aeb-b6d6-57db5921d880-cilium-ipsec-secrets\") pod \"cilium-jvhdn\" (UID: \"2686da6b-6e24-4aeb-b6d6-57db5921d880\") " pod="kube-system/cilium-jvhdn"
Apr 13 20:22:24.307926 kubelet[3399]: I0413 20:22:24.307217 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv45n\" (UniqueName: \"kubernetes.io/projected/2686da6b-6e24-4aeb-b6d6-57db5921d880-kube-api-access-gv45n\") pod \"cilium-jvhdn\" (UID: \"2686da6b-6e24-4aeb-b6d6-57db5921d880\") " pod="kube-system/cilium-jvhdn"
Apr 13 20:22:24.307926 kubelet[3399]: I0413 20:22:24.307245 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2686da6b-6e24-4aeb-b6d6-57db5921d880-xtables-lock\") pod \"cilium-jvhdn\" (UID: \"2686da6b-6e24-4aeb-b6d6-57db5921d880\") " pod="kube-system/cilium-jvhdn"
Apr 13 20:22:24.307926 kubelet[3399]: I0413 20:22:24.307279 3399 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2686da6b-6e24-4aeb-b6d6-57db5921d880-host-proc-sys-kernel\") pod \"cilium-jvhdn\" (UID: \"2686da6b-6e24-4aeb-b6d6-57db5921d880\") " pod="kube-system/cilium-jvhdn"
Apr 13 20:22:24.361300 sshd[5301]: pam_unix(sshd:session): session closed for user core
Apr 13 20:22:24.368524 systemd-logind[2083]: Session 21 logged out. Waiting for processes to exit.
Apr 13 20:22:24.370205 systemd[1]: sshd@20-172.31.31.175:22-50.85.169.122:46906.service: Deactivated successfully.
Apr 13 20:22:24.373906 systemd[1]: session-21.scope: Deactivated successfully.
Apr 13 20:22:24.375740 systemd-logind[2083]: Removed session 21.
Apr 13 20:22:24.494858 containerd[2119]: time="2026-04-13T20:22:24.494802802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jvhdn,Uid:2686da6b-6e24-4aeb-b6d6-57db5921d880,Namespace:kube-system,Attempt:0,}"
Apr 13 20:22:24.535982 containerd[2119]: time="2026-04-13T20:22:24.535748293Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 20:22:24.536259 systemd[1]: Started sshd@21-172.31.31.175:22-50.85.169.122:46914.service - OpenSSH per-connection server daemon (50.85.169.122:46914).
Apr 13 20:22:24.538348 containerd[2119]: time="2026-04-13T20:22:24.536279730Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 20:22:24.538348 containerd[2119]: time="2026-04-13T20:22:24.537174756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:22:24.538348 containerd[2119]: time="2026-04-13T20:22:24.537317349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:22:24.594013 containerd[2119]: time="2026-04-13T20:22:24.593811061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jvhdn,Uid:2686da6b-6e24-4aeb-b6d6-57db5921d880,Namespace:kube-system,Attempt:0,} returns sandbox id \"2adaafd26c395f335735001c3e625735345c616b6b7b485427593c2ad28bbdd7\""
Apr 13 20:22:24.603888 containerd[2119]: time="2026-04-13T20:22:24.603653925Z" level=info msg="CreateContainer within sandbox \"2adaafd26c395f335735001c3e625735345c616b6b7b485427593c2ad28bbdd7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 13 20:22:24.623908 containerd[2119]: time="2026-04-13T20:22:24.623749568Z" level=info msg="CreateContainer within sandbox \"2adaafd26c395f335735001c3e625735345c616b6b7b485427593c2ad28bbdd7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2098b4a91b53a8f77dc9b81e9aa88fdf250e0f9d22ffdb73f384233a4520824f\""
Apr 13 20:22:24.626081 containerd[2119]: time="2026-04-13T20:22:24.625003980Z" level=info msg="StartContainer for \"2098b4a91b53a8f77dc9b81e9aa88fdf250e0f9d22ffdb73f384233a4520824f\""
Apr 13 20:22:24.706032 containerd[2119]: time="2026-04-13T20:22:24.705958624Z" level=info msg="StartContainer for \"2098b4a91b53a8f77dc9b81e9aa88fdf250e0f9d22ffdb73f384233a4520824f\" returns successfully"
Apr 13 20:22:24.822296 containerd[2119]: time="2026-04-13T20:22:24.822197485Z" level=info msg="shim disconnected" id=2098b4a91b53a8f77dc9b81e9aa88fdf250e0f9d22ffdb73f384233a4520824f namespace=k8s.io
Apr 13 20:22:24.822296 containerd[2119]: time="2026-04-13T20:22:24.822288596Z" level=warning msg="cleaning up after shim disconnected" id=2098b4a91b53a8f77dc9b81e9aa88fdf250e0f9d22ffdb73f384233a4520824f namespace=k8s.io
Apr 13 20:22:24.822296 containerd[2119]: time="2026-04-13T20:22:24.822302639Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 20:22:24.839387 containerd[2119]: time="2026-04-13T20:22:24.839325491Z" level=warning msg="cleanup warnings time=\"2026-04-13T20:22:24Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 13 20:22:24.957742 containerd[2119]: time="2026-04-13T20:22:24.957447417Z" level=info msg="CreateContainer within sandbox \"2adaafd26c395f335735001c3e625735345c616b6b7b485427593c2ad28bbdd7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 13 20:22:24.982068 containerd[2119]: time="2026-04-13T20:22:24.982013303Z" level=info msg="CreateContainer within sandbox \"2adaafd26c395f335735001c3e625735345c616b6b7b485427593c2ad28bbdd7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fb8ac17a880cff4dfe521a0686bd0b569b248aeb6b44072e2dc46603e6008fa5\""
Apr 13 20:22:24.982784 containerd[2119]: time="2026-04-13T20:22:24.982745252Z" level=info msg="StartContainer for \"fb8ac17a880cff4dfe521a0686bd0b569b248aeb6b44072e2dc46603e6008fa5\""
Apr 13 20:22:25.054654 containerd[2119]: time="2026-04-13T20:22:25.054597550Z" level=info msg="StartContainer for \"fb8ac17a880cff4dfe521a0686bd0b569b248aeb6b44072e2dc46603e6008fa5\" returns successfully"
Apr 13 20:22:25.129454 containerd[2119]: time="2026-04-13T20:22:25.129373508Z" level=info msg="shim disconnected" id=fb8ac17a880cff4dfe521a0686bd0b569b248aeb6b44072e2dc46603e6008fa5 namespace=k8s.io
Apr 13 20:22:25.129454 containerd[2119]: time="2026-04-13T20:22:25.129452958Z" level=warning msg="cleaning up after shim disconnected" id=fb8ac17a880cff4dfe521a0686bd0b569b248aeb6b44072e2dc46603e6008fa5 namespace=k8s.io
Apr 13 20:22:25.129752 containerd[2119]: time="2026-04-13T20:22:25.129464853Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 20:22:25.534412 sshd[5333]: Accepted publickey for core from 50.85.169.122 port 46914 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4
Apr 13 20:22:25.535105 sshd[5333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:22:25.541669 systemd-logind[2083]: New session 22 of user core.
Apr 13 20:22:25.548572 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 13 20:22:25.969398 containerd[2119]: time="2026-04-13T20:22:25.969341290Z" level=info msg="CreateContainer within sandbox \"2adaafd26c395f335735001c3e625735345c616b6b7b485427593c2ad28bbdd7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 13 20:22:25.996830 containerd[2119]: time="2026-04-13T20:22:25.996776114Z" level=info msg="CreateContainer within sandbox \"2adaafd26c395f335735001c3e625735345c616b6b7b485427593c2ad28bbdd7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f643b44479c3b2f5f1821fab1d0b9fe53ef8a889e4ec576ed959dbde9d0cebed\""
Apr 13 20:22:25.998267 containerd[2119]: time="2026-04-13T20:22:25.997550599Z" level=info msg="StartContainer for \"f643b44479c3b2f5f1821fab1d0b9fe53ef8a889e4ec576ed959dbde9d0cebed\""
Apr 13 20:22:26.094319 containerd[2119]: time="2026-04-13T20:22:26.094254853Z" level=info msg="StartContainer for \"f643b44479c3b2f5f1821fab1d0b9fe53ef8a889e4ec576ed959dbde9d0cebed\" returns successfully"
Apr 13 20:22:26.145501 containerd[2119]: time="2026-04-13T20:22:26.145411739Z" level=info msg="shim disconnected" id=f643b44479c3b2f5f1821fab1d0b9fe53ef8a889e4ec576ed959dbde9d0cebed namespace=k8s.io
Apr 13 20:22:26.145501 containerd[2119]: time="2026-04-13T20:22:26.145499500Z" level=warning msg="cleaning up after shim disconnected" id=f643b44479c3b2f5f1821fab1d0b9fe53ef8a889e4ec576ed959dbde9d0cebed namespace=k8s.io
Apr 13 20:22:26.145873 containerd[2119]: time="2026-04-13T20:22:26.145511233Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 20:22:26.219625 sshd[5333]: pam_unix(sshd:session): session closed for user core
Apr 13 20:22:26.225886 systemd[1]: sshd@21-172.31.31.175:22-50.85.169.122:46914.service: Deactivated successfully.
Apr 13 20:22:26.230168 systemd[1]: session-22.scope: Deactivated successfully.
Apr 13 20:22:26.231874 systemd-logind[2083]: Session 22 logged out. Waiting for processes to exit.
Apr 13 20:22:26.233670 systemd-logind[2083]: Removed session 22.
Apr 13 20:22:26.380514 systemd[1]: Started sshd@22-172.31.31.175:22-50.85.169.122:46930.service - OpenSSH per-connection server daemon (50.85.169.122:46930).
Apr 13 20:22:26.417861 systemd[1]: run-containerd-runc-k8s.io-f643b44479c3b2f5f1821fab1d0b9fe53ef8a889e4ec576ed959dbde9d0cebed-runc.8BPZBh.mount: Deactivated successfully.
Apr 13 20:22:26.418096 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f643b44479c3b2f5f1821fab1d0b9fe53ef8a889e4ec576ed959dbde9d0cebed-rootfs.mount: Deactivated successfully.
Apr 13 20:22:26.965073 containerd[2119]: time="2026-04-13T20:22:26.964928882Z" level=info msg="CreateContainer within sandbox \"2adaafd26c395f335735001c3e625735345c616b6b7b485427593c2ad28bbdd7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 13 20:22:26.998410 containerd[2119]: time="2026-04-13T20:22:26.996459704Z" level=info msg="CreateContainer within sandbox \"2adaafd26c395f335735001c3e625735345c616b6b7b485427593c2ad28bbdd7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"52192ba3c6b3d2b54ddbe33777a60c6b3119d77ebee106bf89b8e7117867ff09\""
Apr 13 20:22:26.999365 containerd[2119]: time="2026-04-13T20:22:26.999321446Z" level=info msg="StartContainer for \"52192ba3c6b3d2b54ddbe33777a60c6b3119d77ebee106bf89b8e7117867ff09\""
Apr 13 20:22:27.088185 containerd[2119]: time="2026-04-13T20:22:27.088020799Z" level=info msg="StartContainer for \"52192ba3c6b3d2b54ddbe33777a60c6b3119d77ebee106bf89b8e7117867ff09\" returns successfully"
Apr 13 20:22:27.126379 containerd[2119]: time="2026-04-13T20:22:27.126296211Z" level=info msg="shim disconnected" id=52192ba3c6b3d2b54ddbe33777a60c6b3119d77ebee106bf89b8e7117867ff09 namespace=k8s.io
Apr 13 20:22:27.126379 containerd[2119]: time="2026-04-13T20:22:27.126374105Z" level=warning msg="cleaning up after shim disconnected" id=52192ba3c6b3d2b54ddbe33777a60c6b3119d77ebee106bf89b8e7117867ff09 namespace=k8s.io
Apr 13 20:22:27.126379 containerd[2119]: time="2026-04-13T20:22:27.126385106Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 20:22:27.342897 sshd[5553]: Accepted publickey for core from 50.85.169.122 port 46930 ssh2: RSA SHA256:z/+dP68XwS9O5xBqTY4V8/RyAnq5F+RWUI36qOQ3Oa4
Apr 13 20:22:27.343641 sshd[5553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:22:27.348871 systemd-logind[2083]: New session 23 of user core.
Apr 13 20:22:27.357580 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 13 20:22:27.417853 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52192ba3c6b3d2b54ddbe33777a60c6b3119d77ebee106bf89b8e7117867ff09-rootfs.mount: Deactivated successfully.
Apr 13 20:22:27.970549 containerd[2119]: time="2026-04-13T20:22:27.970405392Z" level=info msg="CreateContainer within sandbox \"2adaafd26c395f335735001c3e625735345c616b6b7b485427593c2ad28bbdd7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 13 20:22:27.997386 containerd[2119]: time="2026-04-13T20:22:27.997330361Z" level=info msg="CreateContainer within sandbox \"2adaafd26c395f335735001c3e625735345c616b6b7b485427593c2ad28bbdd7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f44cd0d00140a765ad92e7118ada4f957c1c437efe3facc6f2e5b97c9ec227cc\""
Apr 13 20:22:27.998277 containerd[2119]: time="2026-04-13T20:22:27.998242615Z" level=info msg="StartContainer for \"f44cd0d00140a765ad92e7118ada4f957c1c437efe3facc6f2e5b97c9ec227cc\""
Apr 13 20:22:28.088100 containerd[2119]: time="2026-04-13T20:22:28.088043070Z" level=info msg="StartContainer for \"f44cd0d00140a765ad92e7118ada4f957c1c437efe3facc6f2e5b97c9ec227cc\" returns successfully"
Apr 13 20:22:29.124168 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Apr 13 20:22:32.330229 systemd-networkd[1658]: lxc_health: Link UP
Apr 13 20:22:32.337087 systemd-networkd[1658]: lxc_health: Gained carrier
Apr 13 20:22:32.346524 (udev-worker)[6175]: Network interface NamePolicy= disabled on kernel command line.
Apr 13 20:22:32.581697 kubelet[3399]: I0413 20:22:32.580851 3399 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jvhdn" podStartSLOduration=8.580828907 podStartE2EDuration="8.580828907s" podCreationTimestamp="2026-04-13 20:22:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:22:29.004490644 +0000 UTC m=+111.253414459" watchObservedRunningTime="2026-04-13 20:22:32.580828907 +0000 UTC m=+114.829752725"
Apr 13 20:22:33.468457 systemd-networkd[1658]: lxc_health: Gained IPv6LL
Apr 13 20:22:36.464421 ntpd[2067]: Listen normally on 13 lxc_health [fe80::c01:a2ff:fe29:d09e%14]:123
Apr 13 20:22:36.465654 ntpd[2067]: 13 Apr 20:22:36 ntpd[2067]: Listen normally on 13 lxc_health [fe80::c01:a2ff:fe29:d09e%14]:123
Apr 13 20:22:38.195326 containerd[2119]: time="2026-04-13T20:22:38.195283592Z" level=info msg="StopPodSandbox for \"f2b3b388dacbd66b522c78aa0519c8704b036b3ab9036a03998a74d91d432f6e\""
Apr 13 20:22:38.198969 containerd[2119]: time="2026-04-13T20:22:38.196046357Z" level=info msg="TearDown network for sandbox \"f2b3b388dacbd66b522c78aa0519c8704b036b3ab9036a03998a74d91d432f6e\" successfully"
Apr 13 20:22:38.198969 containerd[2119]: time="2026-04-13T20:22:38.196072042Z" level=info msg="StopPodSandbox for \"f2b3b388dacbd66b522c78aa0519c8704b036b3ab9036a03998a74d91d432f6e\" returns successfully"
Apr 13 20:22:38.198969 containerd[2119]: time="2026-04-13T20:22:38.196901784Z" level=info msg="RemovePodSandbox for \"f2b3b388dacbd66b522c78aa0519c8704b036b3ab9036a03998a74d91d432f6e\""
Apr 13 20:22:38.200834 containerd[2119]: time="2026-04-13T20:22:38.200786905Z" level=info msg="Forcibly stopping sandbox \"f2b3b388dacbd66b522c78aa0519c8704b036b3ab9036a03998a74d91d432f6e\""
Apr 13 20:22:38.201160 containerd[2119]: time="2026-04-13T20:22:38.201137790Z" level=info msg="TearDown network for sandbox \"f2b3b388dacbd66b522c78aa0519c8704b036b3ab9036a03998a74d91d432f6e\" successfully"
Apr 13 20:22:38.209696 containerd[2119]: time="2026-04-13T20:22:38.209405579Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f2b3b388dacbd66b522c78aa0519c8704b036b3ab9036a03998a74d91d432f6e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 13 20:22:38.209696 containerd[2119]: time="2026-04-13T20:22:38.209488160Z" level=info msg="RemovePodSandbox \"f2b3b388dacbd66b522c78aa0519c8704b036b3ab9036a03998a74d91d432f6e\" returns successfully"
Apr 13 20:22:38.211184 containerd[2119]: time="2026-04-13T20:22:38.210234435Z" level=info msg="StopPodSandbox for \"ed28bebeba983a8cb81136228f3418d0eb9ab31627f7607fe3d48b49b0098305\""
Apr 13 20:22:38.211184 containerd[2119]: time="2026-04-13T20:22:38.210332223Z" level=info msg="TearDown network for sandbox \"ed28bebeba983a8cb81136228f3418d0eb9ab31627f7607fe3d48b49b0098305\" successfully"
Apr 13 20:22:38.211184 containerd[2119]: time="2026-04-13T20:22:38.210364238Z" level=info msg="StopPodSandbox for \"ed28bebeba983a8cb81136228f3418d0eb9ab31627f7607fe3d48b49b0098305\" returns successfully"
Apr 13 20:22:38.211184 containerd[2119]: time="2026-04-13T20:22:38.210822017Z" level=info msg="RemovePodSandbox for \"ed28bebeba983a8cb81136228f3418d0eb9ab31627f7607fe3d48b49b0098305\""
Apr 13 20:22:38.211184 containerd[2119]: time="2026-04-13T20:22:38.210851938Z" level=info msg="Forcibly stopping sandbox \"ed28bebeba983a8cb81136228f3418d0eb9ab31627f7607fe3d48b49b0098305\""
Apr 13 20:22:38.211184 containerd[2119]: time="2026-04-13T20:22:38.210929024Z" level=info msg="TearDown network for sandbox \"ed28bebeba983a8cb81136228f3418d0eb9ab31627f7607fe3d48b49b0098305\" successfully"
Apr 13 20:22:38.217417 containerd[2119]: time="2026-04-13T20:22:38.217324156Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ed28bebeba983a8cb81136228f3418d0eb9ab31627f7607fe3d48b49b0098305\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 13 20:22:38.217537 containerd[2119]: time="2026-04-13T20:22:38.217449331Z" level=info msg="RemovePodSandbox \"ed28bebeba983a8cb81136228f3418d0eb9ab31627f7607fe3d48b49b0098305\" returns successfully"
Apr 13 20:22:39.533396 sshd[5553]: pam_unix(sshd:session): session closed for user core
Apr 13 20:22:39.537545 systemd[1]: sshd@22-172.31.31.175:22-50.85.169.122:46930.service: Deactivated successfully.
Apr 13 20:22:39.544140 systemd-logind[2083]: Session 23 logged out. Waiting for processes to exit.
Apr 13 20:22:39.544674 systemd[1]: session-23.scope: Deactivated successfully.
Apr 13 20:22:39.546357 systemd-logind[2083]: Removed session 23.
Apr 13 20:22:55.201762 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0c796c1aabb771cf76774312acaff580c7f999d75e47845cf2c16f05cf533b4-rootfs.mount: Deactivated successfully.
Apr 13 20:22:55.221836 containerd[2119]: time="2026-04-13T20:22:55.221736298Z" level=info msg="shim disconnected" id=a0c796c1aabb771cf76774312acaff580c7f999d75e47845cf2c16f05cf533b4 namespace=k8s.io
Apr 13 20:22:55.221836 containerd[2119]: time="2026-04-13T20:22:55.221803811Z" level=warning msg="cleaning up after shim disconnected" id=a0c796c1aabb771cf76774312acaff580c7f999d75e47845cf2c16f05cf533b4 namespace=k8s.io
Apr 13 20:22:55.221836 containerd[2119]: time="2026-04-13T20:22:55.221817239Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 20:22:55.242107 containerd[2119]: time="2026-04-13T20:22:55.242040000Z" level=warning msg="cleanup warnings time=\"2026-04-13T20:22:55Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 13 20:22:56.071020 kubelet[3399]: I0413 20:22:56.070964 3399 scope.go:117] "RemoveContainer" containerID="a0c796c1aabb771cf76774312acaff580c7f999d75e47845cf2c16f05cf533b4"
Apr 13 20:22:56.074311 containerd[2119]: time="2026-04-13T20:22:56.074263735Z" level=info msg="CreateContainer within sandbox \"1c7836bb76eae345199d21240f9874e5c595c3d9b59d15ccb5c16732501728fe\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 13 20:22:56.100520 containerd[2119]: time="2026-04-13T20:22:56.100412150Z" level=info msg="CreateContainer within sandbox \"1c7836bb76eae345199d21240f9874e5c595c3d9b59d15ccb5c16732501728fe\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"ea1d4ede9c664cbae3f14a9edbbf2ff6199c98658abbd57b378c280140739a7a\""
Apr 13 20:22:56.101876 containerd[2119]: time="2026-04-13T20:22:56.101835036Z" level=info msg="StartContainer for \"ea1d4ede9c664cbae3f14a9edbbf2ff6199c98658abbd57b378c280140739a7a\""
Apr 13 20:22:56.198632 containerd[2119]: time="2026-04-13T20:22:56.198579876Z" level=info msg="StartContainer for \"ea1d4ede9c664cbae3f14a9edbbf2ff6199c98658abbd57b378c280140739a7a\" returns successfully"
Apr 13 20:23:00.497545 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c98982dc72da37ebf5b25fabb5a2b42da34795c72ccc6a23997ccdfdba6a4cd-rootfs.mount: Deactivated successfully.
Apr 13 20:23:00.511988 containerd[2119]: time="2026-04-13T20:23:00.511922876Z" level=info msg="shim disconnected" id=5c98982dc72da37ebf5b25fabb5a2b42da34795c72ccc6a23997ccdfdba6a4cd namespace=k8s.io
Apr 13 20:23:00.511988 containerd[2119]: time="2026-04-13T20:23:00.511986925Z" level=warning msg="cleaning up after shim disconnected" id=5c98982dc72da37ebf5b25fabb5a2b42da34795c72ccc6a23997ccdfdba6a4cd namespace=k8s.io
Apr 13 20:23:00.513073 containerd[2119]: time="2026-04-13T20:23:00.511998380Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 20:23:01.105422 kubelet[3399]: I0413 20:23:01.105380 3399 scope.go:117] "RemoveContainer" containerID="5c98982dc72da37ebf5b25fabb5a2b42da34795c72ccc6a23997ccdfdba6a4cd"
Apr 13 20:23:01.119288 containerd[2119]: time="2026-04-13T20:23:01.119089174Z" level=info msg="CreateContainer within sandbox \"ca636419f93c0acf0efd71e12dd3b94b3c3a5fb2f0896c40c9c5d2bff8249959\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 13 20:23:01.214412 containerd[2119]: time="2026-04-13T20:23:01.214261244Z" level=info msg="CreateContainer within sandbox \"ca636419f93c0acf0efd71e12dd3b94b3c3a5fb2f0896c40c9c5d2bff8249959\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"b919c135c7082a980ee1b45092fb7439a37446c4de981c365dd1d0c97e48af6c\""
Apr 13 20:23:01.221538 containerd[2119]: time="2026-04-13T20:23:01.221483911Z" level=info msg="StartContainer for \"b919c135c7082a980ee1b45092fb7439a37446c4de981c365dd1d0c97e48af6c\""
Apr 13 20:23:01.227315 kubelet[3399]: E0413 20:23:01.227232 3399 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.175:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-175?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 13 20:23:01.733989 containerd[2119]: time="2026-04-13T20:23:01.733926060Z" level=info msg="StartContainer for \"b919c135c7082a980ee1b45092fb7439a37446c4de981c365dd1d0c97e48af6c\" returns successfully"