Mar 7 01:14:19.961227 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 6 22:58:19 -00 2026
Mar 7 01:14:19.961267 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:14:19.961288 kernel: BIOS-provided physical RAM map:
Mar 7 01:14:19.961300 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 7 01:14:19.961312 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Mar 7 01:14:19.961324 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Mar 7 01:14:19.961339 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Mar 7 01:14:19.961353 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Mar 7 01:14:19.961366 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Mar 7 01:14:19.961381 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Mar 7 01:14:19.961394 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Mar 7 01:14:19.961407 kernel: NX (Execute Disable) protection: active
Mar 7 01:14:19.961420 kernel: APIC: Static calls initialized
Mar 7 01:14:19.961432 kernel: efi: EFI v2.7 by EDK II
Mar 7 01:14:19.961448 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77015518
Mar 7 01:14:19.961466 kernel: SMBIOS 2.7 present.
Mar 7 01:14:19.961480 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Mar 7 01:14:19.961494 kernel: Hypervisor detected: KVM
Mar 7 01:14:19.961508 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 7 01:14:19.961523 kernel: kvm-clock: using sched offset of 3674190244 cycles
Mar 7 01:14:19.961538 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 7 01:14:19.961553 kernel: tsc: Detected 2499.996 MHz processor
Mar 7 01:14:19.961567 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 7 01:14:19.961594 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 7 01:14:19.961610 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Mar 7 01:14:19.961628 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 7 01:14:19.961642 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 7 01:14:19.961656 kernel: Using GB pages for direct mapping
Mar 7 01:14:19.961671 kernel: Secure boot disabled
Mar 7 01:14:19.961685 kernel: ACPI: Early table checksum verification disabled
Mar 7 01:14:19.961699 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Mar 7 01:14:19.961714 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Mar 7 01:14:19.961728 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Mar 7 01:14:19.961742 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Mar 7 01:14:19.961760 kernel: ACPI: FACS 0x00000000789D0000 000040
Mar 7 01:14:19.961774 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Mar 7 01:14:19.961788 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Mar 7 01:14:19.961802 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Mar 7 01:14:19.961816 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Mar 7 01:14:19.961831 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Mar 7 01:14:19.961851 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Mar 7 01:14:19.961870 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Mar 7 01:14:19.961885 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Mar 7 01:14:19.961901 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Mar 7 01:14:19.961916 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Mar 7 01:14:19.961931 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Mar 7 01:14:19.961947 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Mar 7 01:14:19.961965 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Mar 7 01:14:19.961980 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Mar 7 01:14:19.961995 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Mar 7 01:14:19.962010 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Mar 7 01:14:19.962026 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Mar 7 01:14:19.962041 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Mar 7 01:14:19.962056 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Mar 7 01:14:19.962071 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 7 01:14:19.962086 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Mar 7 01:14:19.962102 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Mar 7 01:14:19.962120 kernel: NUMA: Initialized distance table, cnt=1
Mar 7 01:14:19.962135 kernel: NODE_DATA(0) allocated [mem 0x7a8f0000-0x7a8f5fff]
Mar 7 01:14:19.962150 kernel: Zone ranges:
Mar 7 01:14:19.962166 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 7 01:14:19.962181 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Mar 7 01:14:19.962195 kernel: Normal empty
Mar 7 01:14:19.962211 kernel: Movable zone start for each node
Mar 7 01:14:19.962226 kernel: Early memory node ranges
Mar 7 01:14:19.962241 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 7 01:14:19.962259 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Mar 7 01:14:19.962275 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Mar 7 01:14:19.962290 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Mar 7 01:14:19.962305 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 7 01:14:19.962320 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 7 01:14:19.962336 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Mar 7 01:14:19.962352 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Mar 7 01:14:19.962367 kernel: ACPI: PM-Timer IO Port: 0xb008
Mar 7 01:14:19.962382 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 7 01:14:19.962400 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Mar 7 01:14:19.962415 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 7 01:14:19.962430 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 7 01:14:19.962445 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 7 01:14:19.962461 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 7 01:14:19.962476 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 7 01:14:19.962491 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 7 01:14:19.962506 kernel: TSC deadline timer available
Mar 7 01:14:19.962522 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 7 01:14:19.962540 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 7 01:14:19.962556 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Mar 7 01:14:19.962571 kernel: Booting paravirtualized kernel on KVM
Mar 7 01:14:19.964433 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 7 01:14:19.964455 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 7 01:14:19.964469 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Mar 7 01:14:19.964485 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Mar 7 01:14:19.964501 kernel: pcpu-alloc: [0] 0 1
Mar 7 01:14:19.964516 kernel: kvm-guest: PV spinlocks enabled
Mar 7 01:14:19.964531 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 7 01:14:19.964556 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:14:19.964571 kernel: random: crng init done
Mar 7 01:14:19.966460 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 7 01:14:19.966489 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 7 01:14:19.966507 kernel: Fallback order for Node 0: 0
Mar 7 01:14:19.966524 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Mar 7 01:14:19.966541 kernel: Policy zone: DMA32
Mar 7 01:14:19.966558 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 7 01:14:19.966596 kernel: Memory: 1874624K/2037804K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 162920K reserved, 0K cma-reserved)
Mar 7 01:14:19.966611 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 7 01:14:19.966625 kernel: Kernel/User page tables isolation: enabled
Mar 7 01:14:19.966639 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 7 01:14:19.966655 kernel: ftrace: allocated 149 pages with 4 groups
Mar 7 01:14:19.966670 kernel: Dynamic Preempt: voluntary
Mar 7 01:14:19.966684 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 7 01:14:19.966700 kernel: rcu: RCU event tracing is enabled.
Mar 7 01:14:19.966720 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 7 01:14:19.966736 kernel: Trampoline variant of Tasks RCU enabled.
Mar 7 01:14:19.966751 kernel: Rude variant of Tasks RCU enabled.
Mar 7 01:14:19.966766 kernel: Tracing variant of Tasks RCU enabled.
Mar 7 01:14:19.966781 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 7 01:14:19.966796 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 7 01:14:19.966812 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 7 01:14:19.966828 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 7 01:14:19.966859 kernel: Console: colour dummy device 80x25
Mar 7 01:14:19.966876 kernel: printk: console [tty0] enabled
Mar 7 01:14:19.966892 kernel: printk: console [ttyS0] enabled
Mar 7 01:14:19.966908 kernel: ACPI: Core revision 20230628
Mar 7 01:14:19.966925 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Mar 7 01:14:19.966945 kernel: APIC: Switch to symmetric I/O mode setup
Mar 7 01:14:19.966961 kernel: x2apic enabled
Mar 7 01:14:19.966977 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 7 01:14:19.966994 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Mar 7 01:14:19.967013 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Mar 7 01:14:19.967030 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Mar 7 01:14:19.967047 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Mar 7 01:14:19.967063 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 7 01:14:19.967078 kernel: Spectre V2 : Mitigation: Retpolines
Mar 7 01:14:19.967093 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 7 01:14:19.967109 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Mar 7 01:14:19.967125 kernel: RETBleed: Vulnerable
Mar 7 01:14:19.967141 kernel: Speculative Store Bypass: Vulnerable
Mar 7 01:14:19.967158 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 7 01:14:19.967174 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 7 01:14:19.967193 kernel: GDS: Unknown: Dependent on hypervisor status
Mar 7 01:14:19.967209 kernel: active return thunk: its_return_thunk
Mar 7 01:14:19.967225 kernel: ITS: Mitigation: Aligned branch/return thunks
Mar 7 01:14:19.967249 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 7 01:14:19.967264 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 7 01:14:19.967278 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 7 01:14:19.967291 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Mar 7 01:14:19.967305 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Mar 7 01:14:19.967319 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Mar 7 01:14:19.967335 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Mar 7 01:14:19.967351 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Mar 7 01:14:19.967371 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Mar 7 01:14:19.967387 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 7 01:14:19.967404 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Mar 7 01:14:19.967420 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Mar 7 01:14:19.967436 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Mar 7 01:14:19.967452 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Mar 7 01:14:19.967469 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Mar 7 01:14:19.967485 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Mar 7 01:14:19.967502 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Mar 7 01:14:19.967518 kernel: Freeing SMP alternatives memory: 32K
Mar 7 01:14:19.967534 kernel: pid_max: default: 32768 minimum: 301
Mar 7 01:14:19.967553 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 7 01:14:19.967569 kernel: landlock: Up and running.
Mar 7 01:14:19.967598 kernel: SELinux: Initializing.
Mar 7 01:14:19.967624 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 7 01:14:19.967640 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 7 01:14:19.967657 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Mar 7 01:14:19.967675 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:14:19.967693 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:14:19.967710 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:14:19.967727 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Mar 7 01:14:19.967749 kernel: signal: max sigframe size: 3632
Mar 7 01:14:19.967766 kernel: rcu: Hierarchical SRCU implementation.
Mar 7 01:14:19.967783 kernel: rcu: Max phase no-delay instances is 400.
Mar 7 01:14:19.967798 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 7 01:14:19.967814 kernel: smp: Bringing up secondary CPUs ...
Mar 7 01:14:19.967830 kernel: smpboot: x86: Booting SMP configuration:
Mar 7 01:14:19.967845 kernel: .... node #0, CPUs: #1
Mar 7 01:14:19.967862 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Mar 7 01:14:19.967878 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Mar 7 01:14:19.967897 kernel: smp: Brought up 1 node, 2 CPUs
Mar 7 01:14:19.967912 kernel: smpboot: Max logical packages: 1
Mar 7 01:14:19.967927 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Mar 7 01:14:19.967943 kernel: devtmpfs: initialized
Mar 7 01:14:19.967958 kernel: x86/mm: Memory block size: 128MB
Mar 7 01:14:19.967975 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Mar 7 01:14:19.967992 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 7 01:14:19.968008 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 7 01:14:19.968023 kernel: pinctrl core: initialized pinctrl subsystem
Mar 7 01:14:19.968044 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 7 01:14:19.968061 kernel: audit: initializing netlink subsys (disabled)
Mar 7 01:14:19.968079 kernel: audit: type=2000 audit(1772846059.976:1): state=initialized audit_enabled=0 res=1
Mar 7 01:14:19.968096 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 7 01:14:19.968112 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 7 01:14:19.968129 kernel: cpuidle: using governor menu
Mar 7 01:14:19.968145 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 7 01:14:19.968159 kernel: dca service started, version 1.12.1
Mar 7 01:14:19.968175 kernel: PCI: Using configuration type 1 for base access
Mar 7 01:14:19.968193 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 7 01:14:19.968208 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 7 01:14:19.968222 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 7 01:14:19.968237 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 7 01:14:19.968252 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 7 01:14:19.968266 kernel: ACPI: Added _OSI(Module Device)
Mar 7 01:14:19.968281 kernel: ACPI: Added _OSI(Processor Device)
Mar 7 01:14:19.968296 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 7 01:14:19.968310 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Mar 7 01:14:19.968328 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 7 01:14:19.968343 kernel: ACPI: Interpreter enabled
Mar 7 01:14:19.968357 kernel: ACPI: PM: (supports S0 S5)
Mar 7 01:14:19.968372 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 7 01:14:19.968387 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 7 01:14:19.968403 kernel: PCI: Using E820 reservations for host bridge windows
Mar 7 01:14:19.968417 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Mar 7 01:14:19.968432 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 7 01:14:19.969388 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Mar 7 01:14:19.969565 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Mar 7 01:14:19.970771 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Mar 7 01:14:19.970797 kernel: acpiphp: Slot [3] registered
Mar 7 01:14:19.970813 kernel: acpiphp: Slot [4] registered
Mar 7 01:14:19.970829 kernel: acpiphp: Slot [5] registered
Mar 7 01:14:19.970845 kernel: acpiphp: Slot [6] registered
Mar 7 01:14:19.970861 kernel: acpiphp: Slot [7] registered
Mar 7 01:14:19.970883 kernel: acpiphp: Slot [8] registered
Mar 7 01:14:19.970898 kernel: acpiphp: Slot [9] registered
Mar 7 01:14:19.970914 kernel: acpiphp: Slot [10] registered
Mar 7 01:14:19.970931 kernel: acpiphp: Slot [11] registered
Mar 7 01:14:19.970946 kernel: acpiphp: Slot [12] registered
Mar 7 01:14:19.970962 kernel: acpiphp: Slot [13] registered
Mar 7 01:14:19.970977 kernel: acpiphp: Slot [14] registered
Mar 7 01:14:19.970993 kernel: acpiphp: Slot [15] registered
Mar 7 01:14:19.971008 kernel: acpiphp: Slot [16] registered
Mar 7 01:14:19.971029 kernel: acpiphp: Slot [17] registered
Mar 7 01:14:19.971044 kernel: acpiphp: Slot [18] registered
Mar 7 01:14:19.971059 kernel: acpiphp: Slot [19] registered
Mar 7 01:14:19.971074 kernel: acpiphp: Slot [20] registered
Mar 7 01:14:19.971088 kernel: acpiphp: Slot [21] registered
Mar 7 01:14:19.971102 kernel: acpiphp: Slot [22] registered
Mar 7 01:14:19.971118 kernel: acpiphp: Slot [23] registered
Mar 7 01:14:19.971133 kernel: acpiphp: Slot [24] registered
Mar 7 01:14:19.971149 kernel: acpiphp: Slot [25] registered
Mar 7 01:14:19.971167 kernel: acpiphp: Slot [26] registered
Mar 7 01:14:19.971188 kernel: acpiphp: Slot [27] registered
Mar 7 01:14:19.971203 kernel: acpiphp: Slot [28] registered
Mar 7 01:14:19.971217 kernel: acpiphp: Slot [29] registered
Mar 7 01:14:19.971243 kernel: acpiphp: Slot [30] registered
Mar 7 01:14:19.971259 kernel: acpiphp: Slot [31] registered
Mar 7 01:14:19.971272 kernel: PCI host bridge to bus 0000:00
Mar 7 01:14:19.971430 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 7 01:14:19.971557 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 7 01:14:19.971707 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 7 01:14:19.971835 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Mar 7 01:14:19.971960 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Mar 7 01:14:19.972085 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 7 01:14:19.972245 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Mar 7 01:14:19.972397 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Mar 7 01:14:19.972555 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Mar 7 01:14:19.975932 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Mar 7 01:14:19.976096 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Mar 7 01:14:19.976253 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Mar 7 01:14:19.976400 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Mar 7 01:14:19.976537 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Mar 7 01:14:19.976694 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Mar 7 01:14:19.976844 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Mar 7 01:14:19.976996 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Mar 7 01:14:19.977139 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Mar 7 01:14:19.977279 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Mar 7 01:14:19.977419 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Mar 7 01:14:19.977558 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 7 01:14:19.981426 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Mar 7 01:14:19.981629 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Mar 7 01:14:19.981780 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Mar 7 01:14:19.981918 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Mar 7 01:14:19.981938 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 7 01:14:19.981955 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 7 01:14:19.981971 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 7 01:14:19.981987 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 7 01:14:19.982003 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Mar 7 01:14:19.982024 kernel: iommu: Default domain type: Translated
Mar 7 01:14:19.982040 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 7 01:14:19.982055 kernel: efivars: Registered efivars operations
Mar 7 01:14:19.982071 kernel: PCI: Using ACPI for IRQ routing
Mar 7 01:14:19.982087 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 7 01:14:19.982102 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Mar 7 01:14:19.982117 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Mar 7 01:14:19.982252 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Mar 7 01:14:19.982389 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Mar 7 01:14:19.982528 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 7 01:14:19.982547 kernel: vgaarb: loaded
Mar 7 01:14:19.982563 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Mar 7 01:14:19.982579 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Mar 7 01:14:19.982625 kernel: clocksource: Switched to clocksource kvm-clock
Mar 7 01:14:19.982640 kernel: VFS: Disk quotas dquot_6.6.0
Mar 7 01:14:19.982656 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 7 01:14:19.982671 kernel: pnp: PnP ACPI init
Mar 7 01:14:19.982687 kernel: pnp: PnP ACPI: found 5 devices
Mar 7 01:14:19.982707 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 7 01:14:19.982722 kernel: NET: Registered PF_INET protocol family
Mar 7 01:14:19.982738 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 7 01:14:19.982754 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Mar 7 01:14:19.982769 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 7 01:14:19.982786 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 7 01:14:19.982801 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Mar 7 01:14:19.982817 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Mar 7 01:14:19.982836 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 7 01:14:19.982852 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 7 01:14:19.982867 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 7 01:14:19.982883 kernel: NET: Registered PF_XDP protocol family
Mar 7 01:14:19.983014 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 7 01:14:19.983137 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 7 01:14:19.983266 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 7 01:14:19.983388 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Mar 7 01:14:19.983508 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Mar 7 01:14:19.983677 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Mar 7 01:14:19.983698 kernel: PCI: CLS 0 bytes, default 64
Mar 7 01:14:19.983714 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 7 01:14:19.983730 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Mar 7 01:14:19.983746 kernel: clocksource: Switched to clocksource tsc
Mar 7 01:14:19.983762 kernel: Initialise system trusted keyrings
Mar 7 01:14:19.983778 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Mar 7 01:14:19.983793 kernel: Key type asymmetric registered
Mar 7 01:14:19.983812 kernel: Asymmetric key parser 'x509' registered
Mar 7 01:14:19.983828 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 7 01:14:19.983844 kernel: io scheduler mq-deadline registered
Mar 7 01:14:19.983859 kernel: io scheduler kyber registered
Mar 7 01:14:19.983874 kernel: io scheduler bfq registered
Mar 7 01:14:19.983890 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 7 01:14:19.983905 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 7 01:14:19.983921 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 7 01:14:19.983937 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 7 01:14:19.983955 kernel: i8042: Warning: Keylock active
Mar 7 01:14:19.983970 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 7 01:14:19.983986 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 7 01:14:19.984131 kernel: rtc_cmos 00:00: RTC can wake from S4
Mar 7 01:14:19.984259 kernel: rtc_cmos 00:00: registered as rtc0
Mar 7 01:14:19.984385 kernel: rtc_cmos 00:00: setting system clock to 2026-03-07T01:14:19 UTC (1772846059)
Mar 7 01:14:19.984511 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Mar 7 01:14:19.984530 kernel: intel_pstate: CPU model not supported
Mar 7 01:14:19.984549 kernel: efifb: probing for efifb
Mar 7 01:14:19.984565 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Mar 7 01:14:19.984591 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Mar 7 01:14:19.984608 kernel: efifb: scrolling: redraw
Mar 7 01:14:19.984623 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 7 01:14:19.984639 kernel: Console: switching to colour frame buffer device 100x37
Mar 7 01:14:19.984654 kernel: fb0: EFI VGA frame buffer device
Mar 7 01:14:19.984670 kernel: pstore: Using crash dump compression: deflate
Mar 7 01:14:19.984686 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 7 01:14:19.984705 kernel: NET: Registered PF_INET6 protocol family
Mar 7 01:14:19.984720 kernel: Segment Routing with IPv6
Mar 7 01:14:19.984735 kernel: In-situ OAM (IOAM) with IPv6
Mar 7 01:14:19.984751 kernel: NET: Registered PF_PACKET protocol family
Mar 7 01:14:19.984767 kernel: Key type dns_resolver registered
Mar 7 01:14:19.984782 kernel: IPI shorthand broadcast: enabled
Mar 7 01:14:19.984825 kernel: sched_clock: Marking stable (479003105, 128450221)->(678437028, -70983702)
Mar 7 01:14:19.984845 kernel: registered taskstats version 1
Mar 7 01:14:19.984861 kernel: Loading compiled-in X.509 certificates
Mar 7 01:14:19.984881 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: da286e6f6c247ee6f65a875c513de7da57782e90'
Mar 7 01:14:19.984897 kernel: Key type .fscrypt registered
Mar 7 01:14:19.984913 kernel: Key type fscrypt-provisioning registered
Mar 7 01:14:19.984928 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 7 01:14:19.984945 kernel: ima: Allocated hash algorithm: sha1
Mar 7 01:14:19.984961 kernel: ima: No architecture policies found
Mar 7 01:14:19.984977 kernel: clk: Disabling unused clocks
Mar 7 01:14:19.984993 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 7 01:14:19.985010 kernel: Write protecting the kernel read-only data: 36864k
Mar 7 01:14:19.985029 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 7 01:14:19.985045 kernel: Run /init as init process
Mar 7 01:14:19.985061 kernel: with arguments:
Mar 7 01:14:19.985077 kernel: /init
Mar 7 01:14:19.985093 kernel: with environment:
Mar 7 01:14:19.985109 kernel: HOME=/
Mar 7 01:14:19.985125 kernel: TERM=linux
Mar 7 01:14:19.985144 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:14:19.985167 systemd[1]: Detected virtualization amazon.
Mar 7 01:14:19.985185 systemd[1]: Detected architecture x86-64.
Mar 7 01:14:19.985201 systemd[1]: Running in initrd.
Mar 7 01:14:19.985221 systemd[1]: No hostname configured, using default hostname.
Mar 7 01:14:19.985238 systemd[1]: Hostname set to .
Mar 7 01:14:19.985255 systemd[1]: Initializing machine ID from VM UUID.
Mar 7 01:14:19.985272 systemd[1]: Queued start job for default target initrd.target.
Mar 7 01:14:19.985289 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:14:19.985310 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:14:19.985328 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 7 01:14:19.985345 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:14:19.985363 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 7 01:14:19.985384 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 7 01:14:19.985407 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 7 01:14:19.985425 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 7 01:14:19.985443 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:14:19.985460 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:14:19.985477 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:14:19.985495 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:14:19.985512 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:14:19.985533 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:14:19.985550 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:14:19.985567 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:14:19.987617 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 01:14:19.987643 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 7 01:14:19.987661 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:14:19.987679 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:14:19.987696 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:14:19.987713 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:14:19.987737 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 7 01:14:19.987755 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:14:19.987772 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 7 01:14:19.987787 systemd[1]: Starting systemd-fsck-usr.service...
Mar 7 01:14:19.987802 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:14:19.987821 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:14:19.987841 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:14:19.987896 systemd-journald[179]: Collecting audit messages is disabled.
Mar 7 01:14:19.987944 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 7 01:14:19.987963 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:14:19.987982 systemd[1]: Finished systemd-fsck-usr.service.
Mar 7 01:14:19.988006 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:14:19.988026 systemd-journald[179]: Journal started
Mar 7 01:14:19.988064 systemd-journald[179]: Runtime Journal (/run/log/journal/ec2ac4c4f2b780664d09a3420382c237) is 4.7M, max 38.2M, 33.4M free.
Mar 7 01:14:19.954823 systemd-modules-load[180]: Inserted module 'overlay'
Mar 7 01:14:19.993654 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:14:20.006968 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 7 01:14:20.007043 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 7 01:14:20.009127 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:14:20.012494 kernel: Bridge firewalling registered
Mar 7 01:14:20.010787 systemd-modules-load[180]: Inserted module 'br_netfilter'
Mar 7 01:14:20.014978 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:14:20.020791 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:14:20.024802 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:14:20.028098 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:14:20.030704 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:14:20.044794 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:14:20.059212 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:14:20.061050 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:14:20.063436 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:14:20.069849 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 7 01:14:20.073773 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 01:14:20.076901 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:14:20.088500 dracut-cmdline[211]: dracut-dracut-053
Mar 7 01:14:20.094080 dracut-cmdline[211]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:14:20.135993 systemd-resolved[213]: Positive Trust Anchors:
Mar 7 01:14:20.136011 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 01:14:20.136073 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 01:14:20.144328 systemd-resolved[213]: Defaulting to hostname 'linux'.
Mar 7 01:14:20.147604 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 01:14:20.148313 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:14:20.181624 kernel: SCSI subsystem initialized
Mar 7 01:14:20.191612 kernel: Loading iSCSI transport class v2.0-870.
Mar 7 01:14:20.202611 kernel: iscsi: registered transport (tcp)
Mar 7 01:14:20.224776 kernel: iscsi: registered transport (qla4xxx)
Mar 7 01:14:20.224862 kernel: QLogic iSCSI HBA Driver
Mar 7 01:14:20.263858 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:14:20.271822 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 7 01:14:20.297949 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 7 01:14:20.298031 kernel: device-mapper: uevent: version 1.0.3
Mar 7 01:14:20.298053 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 7 01:14:20.341615 kernel: raid6: avx512x4 gen() 17910 MB/s
Mar 7 01:14:20.359613 kernel: raid6: avx512x2 gen() 17797 MB/s
Mar 7 01:14:20.377614 kernel: raid6: avx512x1 gen() 17712 MB/s
Mar 7 01:14:20.395613 kernel: raid6: avx2x4 gen() 17548 MB/s
Mar 7 01:14:20.413612 kernel: raid6: avx2x2 gen() 17578 MB/s
Mar 7 01:14:20.431932 kernel: raid6: avx2x1 gen() 13681 MB/s
Mar 7 01:14:20.431987 kernel: raid6: using algorithm avx512x4 gen() 17910 MB/s
Mar 7 01:14:20.450802 kernel: raid6: .... xor() 7561 MB/s, rmw enabled
Mar 7 01:14:20.450862 kernel: raid6: using avx512x2 recovery algorithm
Mar 7 01:14:20.472620 kernel: xor: automatically using best checksumming function avx
Mar 7 01:14:20.632617 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 7 01:14:20.643405 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:14:20.649770 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:14:20.664103 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Mar 7 01:14:20.669337 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:14:20.678883 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 7 01:14:20.696754 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation
Mar 7 01:14:20.727337 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 01:14:20.735834 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 01:14:20.786532 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:14:20.795888 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 7 01:14:20.819485 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:14:20.821868 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:14:20.824104 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:14:20.824626 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 01:14:20.831880 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 7 01:14:20.861290 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:14:20.891757 kernel: cryptd: max_cpu_qlen set to 1000
Mar 7 01:14:20.908618 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 7 01:14:20.908698 kernel: AES CTR mode by8 optimization enabled
Mar 7 01:14:20.917195 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:14:20.918466 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:14:20.920386 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:14:20.921631 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:14:20.921815 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:14:20.922904 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:14:20.932259 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:14:20.935745 kernel: ena 0000:00:05.0: ENA device version: 0.10
Mar 7 01:14:20.936015 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Mar 7 01:14:20.944631 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Mar 7 01:14:20.952731 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:14:20.958234 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:2b:13:14:53:bb
Mar 7 01:14:20.952857 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:14:20.958404 (udev-worker)[447]: Network interface NamePolicy= disabled on kernel command line.
Mar 7 01:14:20.971605 kernel: nvme nvme0: pci function 0000:00:04.0
Mar 7 01:14:20.970716 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:14:20.976927 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Mar 7 01:14:20.988611 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Mar 7 01:14:20.998683 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 7 01:14:20.998752 kernel: GPT:9289727 != 33554431
Mar 7 01:14:20.998772 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 7 01:14:21.000919 kernel: GPT:9289727 != 33554431
Mar 7 01:14:21.000973 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 7 01:14:21.000992 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 7 01:14:21.004440 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:14:21.015853 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:14:21.034849 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:14:21.071284 kernel: BTRFS: device fsid 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 devid 1 transid 34 /dev/nvme0n1p3 scanned by (udev-worker) (450)
Mar 7 01:14:21.098640 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (454)
Mar 7 01:14:21.108542 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Mar 7 01:14:21.168356 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Mar 7 01:14:21.168998 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Mar 7 01:14:21.176426 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 7 01:14:21.183054 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Mar 7 01:14:21.190802 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 7 01:14:21.198535 disk-uuid[631]: Primary Header is updated.
Mar 7 01:14:21.198535 disk-uuid[631]: Secondary Entries is updated.
Mar 7 01:14:21.198535 disk-uuid[631]: Secondary Header is updated.
Mar 7 01:14:21.205185 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 7 01:14:21.210618 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 7 01:14:21.216617 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 7 01:14:22.219595 disk-uuid[632]: The operation has completed successfully.
Mar 7 01:14:22.220607 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 7 01:14:22.368200 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 7 01:14:22.368354 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 7 01:14:22.392859 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 7 01:14:22.396654 sh[975]: Success
Mar 7 01:14:22.419645 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Mar 7 01:14:22.530017 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 7 01:14:22.537723 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 7 01:14:22.540791 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 7 01:14:22.587424 kernel: BTRFS info (device dm-0): first mount of filesystem 3bed8db9-42ad-4483-9cc8-1ad17a6cd948
Mar 7 01:14:22.587504 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:14:22.587527 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 7 01:14:22.589843 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 7 01:14:22.592056 kernel: BTRFS info (device dm-0): using free space tree
Mar 7 01:14:22.618619 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 7 01:14:22.633036 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 7 01:14:22.634336 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 7 01:14:22.640809 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 7 01:14:22.643798 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 7 01:14:22.668160 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:14:22.668232 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:14:22.668255 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 7 01:14:22.687603 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 7 01:14:22.702617 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:14:22.702150 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 7 01:14:22.711468 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 7 01:14:22.719900 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 7 01:14:22.756501 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 01:14:22.762789 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 01:14:22.807248 systemd-networkd[1169]: lo: Link UP
Mar 7 01:14:22.808226 systemd-networkd[1169]: lo: Gained carrier
Mar 7 01:14:22.811088 systemd-networkd[1169]: Enumeration completed
Mar 7 01:14:22.811551 systemd-networkd[1169]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:14:22.811556 systemd-networkd[1169]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 01:14:22.812637 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 01:14:22.813392 systemd[1]: Reached target network.target - Network.
Mar 7 01:14:22.825422 systemd-networkd[1169]: eth0: Link UP
Mar 7 01:14:22.825430 systemd-networkd[1169]: eth0: Gained carrier
Mar 7 01:14:22.825447 systemd-networkd[1169]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:14:22.840853 systemd-networkd[1169]: eth0: DHCPv4 address 172.31.16.11/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 7 01:14:22.882418 ignition[1118]: Ignition 2.19.0
Mar 7 01:14:22.883105 ignition[1118]: Stage: fetch-offline
Mar 7 01:14:22.883413 ignition[1118]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:14:22.883422 ignition[1118]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 01:14:22.883756 ignition[1118]: Ignition finished successfully
Mar 7 01:14:22.885968 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 01:14:22.891830 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 7 01:14:22.916434 ignition[1177]: Ignition 2.19.0
Mar 7 01:14:22.916448 ignition[1177]: Stage: fetch
Mar 7 01:14:22.916945 ignition[1177]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:14:22.916960 ignition[1177]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 01:14:22.917079 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 01:14:22.930354 ignition[1177]: PUT result: OK
Mar 7 01:14:22.932698 ignition[1177]: parsed url from cmdline: ""
Mar 7 01:14:22.932710 ignition[1177]: no config URL provided
Mar 7 01:14:22.932721 ignition[1177]: reading system config file "/usr/lib/ignition/user.ign"
Mar 7 01:14:22.932736 ignition[1177]: no config at "/usr/lib/ignition/user.ign"
Mar 7 01:14:22.932761 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 01:14:22.933438 ignition[1177]: PUT result: OK
Mar 7 01:14:22.933503 ignition[1177]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Mar 7 01:14:22.934203 ignition[1177]: GET result: OK
Mar 7 01:14:22.934302 ignition[1177]: parsing config with SHA512: fdada89b379d7db8682318f0feb5ba197d6cf24df28054365b58ff2ced59cec968886d842e3436a4bf95f6855b6be4b9b906624142b67835ff3c46757b5bed34
Mar 7 01:14:22.940426 unknown[1177]: fetched base config from "system"
Mar 7 01:14:22.940442 unknown[1177]: fetched base config from "system"
Mar 7 01:14:22.940452 unknown[1177]: fetched user config from "aws"
Mar 7 01:14:22.941490 ignition[1177]: fetch: fetch complete
Mar 7 01:14:22.941499 ignition[1177]: fetch: fetch passed
Mar 7 01:14:22.941575 ignition[1177]: Ignition finished successfully
Mar 7 01:14:22.944426 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 7 01:14:22.947813 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 7 01:14:22.965480 ignition[1183]: Ignition 2.19.0
Mar 7 01:14:22.965494 ignition[1183]: Stage: kargs
Mar 7 01:14:22.965990 ignition[1183]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:14:22.966005 ignition[1183]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 01:14:22.966127 ignition[1183]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 01:14:22.967102 ignition[1183]: PUT result: OK
Mar 7 01:14:22.970225 ignition[1183]: kargs: kargs passed
Mar 7 01:14:22.970323 ignition[1183]: Ignition finished successfully
Mar 7 01:14:22.972340 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 7 01:14:22.979822 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 7 01:14:22.994967 ignition[1190]: Ignition 2.19.0
Mar 7 01:14:22.994980 ignition[1190]: Stage: disks
Mar 7 01:14:22.995544 ignition[1190]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:14:22.995559 ignition[1190]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 01:14:22.995700 ignition[1190]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 01:14:22.998749 ignition[1190]: PUT result: OK
Mar 7 01:14:23.001968 ignition[1190]: disks: disks passed
Mar 7 01:14:23.002043 ignition[1190]: Ignition finished successfully
Mar 7 01:14:23.003641 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 7 01:14:23.004639 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 7 01:14:23.005055 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 7 01:14:23.005628 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 01:14:23.006219 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 01:14:23.006819 systemd[1]: Reached target basic.target - Basic System.
Mar 7 01:14:23.017884 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 7 01:14:23.049259 systemd-fsck[1198]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 7 01:14:23.053476 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 7 01:14:23.057735 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 7 01:14:23.173611 kernel: EXT4-fs (nvme0n1p9): mounted filesystem aab0506b-de72-4dd2-9393-24d7958f49a5 r/w with ordered data mode. Quota mode: none.
Mar 7 01:14:23.173933 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 7 01:14:23.175074 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 7 01:14:23.205732 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:14:23.209729 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 7 01:14:23.211688 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 7 01:14:23.213020 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 7 01:14:23.213113 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 01:14:23.224226 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 7 01:14:23.233232 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1217)
Mar 7 01:14:23.233281 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:14:23.233302 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:14:23.233321 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 7 01:14:23.237853 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 7 01:14:23.247614 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 7 01:14:23.250064 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:14:23.327887 initrd-setup-root[1241]: cut: /sysroot/etc/passwd: No such file or directory
Mar 7 01:14:23.346210 initrd-setup-root[1248]: cut: /sysroot/etc/group: No such file or directory
Mar 7 01:14:23.353902 initrd-setup-root[1255]: cut: /sysroot/etc/shadow: No such file or directory
Mar 7 01:14:23.359552 initrd-setup-root[1262]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 7 01:14:23.461555 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 7 01:14:23.465700 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 7 01:14:23.469225 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 7 01:14:23.480616 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:14:23.510115 ignition[1330]: INFO : Ignition 2.19.0
Mar 7 01:14:23.510115 ignition[1330]: INFO : Stage: mount
Mar 7 01:14:23.511890 ignition[1330]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:14:23.511890 ignition[1330]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 01:14:23.511890 ignition[1330]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 01:14:23.513571 ignition[1330]: INFO : PUT result: OK
Mar 7 01:14:23.516622 ignition[1330]: INFO : mount: mount passed
Mar 7 01:14:23.516622 ignition[1330]: INFO : Ignition finished successfully
Mar 7 01:14:23.517697 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 7 01:14:23.526135 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 7 01:14:23.527928 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 7 01:14:23.582870 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 7 01:14:23.588864 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:14:23.606617 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1342)
Mar 7 01:14:23.609731 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:14:23.609812 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:14:23.612217 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 7 01:14:23.617682 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 7 01:14:23.619339 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:14:23.648178 ignition[1359]: INFO : Ignition 2.19.0
Mar 7 01:14:23.648911 ignition[1359]: INFO : Stage: files
Mar 7 01:14:23.649725 ignition[1359]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:14:23.650337 ignition[1359]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 01:14:23.650337 ignition[1359]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 01:14:23.651402 ignition[1359]: INFO : PUT result: OK
Mar 7 01:14:23.654287 ignition[1359]: DEBUG : files: compiled without relabeling support, skipping
Mar 7 01:14:23.655053 ignition[1359]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 7 01:14:23.655053 ignition[1359]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 7 01:14:23.659791 ignition[1359]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 7 01:14:23.660804 ignition[1359]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 7 01:14:23.660804 ignition[1359]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 7 01:14:23.660378 unknown[1359]: wrote ssh authorized keys file for user: core
Mar 7 01:14:23.663622 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 7 01:14:23.663622 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 7 01:14:23.663622 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:14:23.663622 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 7 01:14:23.756515 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 7 01:14:23.944364 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:14:23.944364 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 7 01:14:23.946364 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 7 01:14:24.153148 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Mar 7 01:14:24.301447 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 7 01:14:24.301447 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Mar 7 01:14:24.304237 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Mar 7 01:14:24.304237 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:14:24.304237 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:14:24.304237 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:14:24.304237 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:14:24.304237 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:14:24.304237 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:14:24.304237 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:14:24.304237 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:14:24.304237 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 7 01:14:24.304237 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 7 01:14:24.304237 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 7 01:14:24.304237 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Mar 7 01:14:24.726927 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Mar 7 01:14:24.738828 systemd-networkd[1169]: eth0: Gained IPv6LL
Mar 7 01:14:25.202004 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 7 01:14:25.202004 ignition[1359]: INFO : files: op(d): [started] processing unit "containerd.service"
Mar 7 01:14:25.204743 ignition[1359]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 7 01:14:25.206680 ignition[1359]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 7 01:14:25.206680 ignition[1359]: INFO : files: op(d): [finished] processing unit "containerd.service"
Mar 7 01:14:25.206680 ignition[1359]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Mar 7 01:14:25.206680 ignition[1359]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:14:25.206680 ignition[1359]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:14:25.206680 ignition[1359]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Mar 7 01:14:25.206680 ignition[1359]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Mar 7 01:14:25.206680 ignition[1359]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Mar 7 01:14:25.206680 ignition[1359]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:14:25.206680 ignition[1359]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:14:25.206680 ignition[1359]: INFO : files: files passed
Mar 7 01:14:25.206680 ignition[1359]: INFO : Ignition finished successfully
Mar 7 01:14:25.207541 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 7 01:14:25.218393 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 7 01:14:25.222788 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 7 01:14:25.225356 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 7 01:14:25.225517 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 7 01:14:25.246241 initrd-setup-root-after-ignition[1388]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:14:25.246241 initrd-setup-root-after-ignition[1388]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:14:25.249886 initrd-setup-root-after-ignition[1392]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:14:25.250016 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:14:25.252061 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 7 01:14:25.258775 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 7 01:14:25.295902 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 7 01:14:25.296048 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 7 01:14:25.297638 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 7 01:14:25.298457 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 7 01:14:25.299408 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 7 01:14:25.305795 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 7 01:14:25.319086 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 01:14:25.329858 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 7 01:14:25.341417 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:14:25.342205 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:14:25.343380 systemd[1]: Stopped target timers.target - Timer Units.
Mar 7 01:14:25.344219 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 7 01:14:25.344452 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 01:14:25.345649 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 7 01:14:25.346604 systemd[1]: Stopped target basic.target - Basic System.
Mar 7 01:14:25.347552 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 7 01:14:25.348315 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 01:14:25.349144 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 7 01:14:25.349902 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 7 01:14:25.350680 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:14:25.351578 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 7 01:14:25.352754 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 7 01:14:25.353506 systemd[1]: Stopped target swap.target - Swaps.
Mar 7 01:14:25.354234 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 7 01:14:25.354417 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:14:25.355660 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:14:25.356443 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:14:25.357136 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 7 01:14:25.357655 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:14:25.358289 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 7 01:14:25.358460 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:14:25.360035 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 7 01:14:25.360222 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:14:25.360939 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 7 01:14:25.361091 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 7 01:14:25.369907 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 7 01:14:25.373014 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 7 01:14:25.373628 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 7 01:14:25.373877 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:14:25.377336 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 7 01:14:25.377551 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 01:14:25.392173 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 7 01:14:25.392308 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 7 01:14:25.396462 ignition[1412]: INFO : Ignition 2.19.0
Mar 7 01:14:25.396462 ignition[1412]: INFO : Stage: umount
Mar 7 01:14:25.396462 ignition[1412]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:14:25.396462 ignition[1412]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 01:14:25.396462 ignition[1412]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 01:14:25.400704 ignition[1412]: INFO : PUT result: OK
Mar 7 01:14:25.404604 ignition[1412]: INFO : umount: umount passed
Mar 7 01:14:25.407214 ignition[1412]: INFO : Ignition finished successfully
Mar 7 01:14:25.408058 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 7 01:14:25.408217 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 7 01:14:25.410284 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 7 01:14:25.410406 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 7 01:14:25.411614 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 7 01:14:25.411680 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 7 01:14:25.413238 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 7 01:14:25.413299 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 7 01:14:25.414697 systemd[1]: Stopped target network.target - Network.
Mar 7 01:14:25.415509 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 7 01:14:25.415575 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 01:14:25.416696 systemd[1]: Stopped target paths.target - Path Units.
Mar 7 01:14:25.417152 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 7 01:14:25.421656 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:14:25.422033 systemd[1]: Stopped target slices.target - Slice Units.
Mar 7 01:14:25.422949 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 7 01:14:25.423737 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 7 01:14:25.423793 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:14:25.424345 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 7 01:14:25.424398 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:14:25.424947 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 7 01:14:25.425011 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 7 01:14:25.425569 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 7 01:14:25.425644 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 7 01:14:25.426387 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 7 01:14:25.427133 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 7 01:14:25.429368 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 7 01:14:25.430669 systemd-networkd[1169]: eth0: DHCPv6 lease lost
Mar 7 01:14:25.432949 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 7 01:14:25.433089 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 7 01:14:25.434822 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 7 01:14:25.434952 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 7 01:14:25.438035 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 7 01:14:25.438100 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:14:25.443752 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 7 01:14:25.444359 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 7 01:14:25.444478 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 01:14:25.445773 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 7 01:14:25.445843 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:14:25.446937 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 7 01:14:25.447001 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:14:25.447925 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 7 01:14:25.447993 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:14:25.448580 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:14:25.463145 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 7 01:14:25.463471 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:14:25.464748 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 7 01:14:25.464886 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 7 01:14:25.466284 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 7 01:14:25.466395 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:14:25.467749 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 7 01:14:25.467817 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:14:25.468540 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 7 01:14:25.468722 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:14:25.469747 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 7 01:14:25.469818 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:14:25.470912 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:14:25.470978 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:14:25.480169 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 7 01:14:25.480807 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 7 01:14:25.480890 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:14:25.481572 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 7 01:14:25.481646 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:14:25.482758 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 7 01:14:25.482819 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:14:25.487036 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:14:25.487111 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:14:25.490957 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 7 01:14:25.491093 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 7 01:14:25.534203 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 7 01:14:25.534345 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 7 01:14:25.535659 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 7 01:14:25.536298 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 7 01:14:25.536373 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 7 01:14:25.540774 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 7 01:14:25.553225 systemd[1]: Switching root.
Mar 7 01:14:25.580654 systemd-journald[179]: Journal stopped
Mar 7 01:14:26.913897 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Mar 7 01:14:26.914007 kernel: SELinux: policy capability network_peer_controls=1
Mar 7 01:14:26.914032 kernel: SELinux: policy capability open_perms=1
Mar 7 01:14:26.914054 kernel: SELinux: policy capability extended_socket_class=1
Mar 7 01:14:26.914091 kernel: SELinux: policy capability always_check_network=0
Mar 7 01:14:26.914112 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 7 01:14:26.914139 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 7 01:14:26.914159 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 7 01:14:26.914179 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 7 01:14:26.914199 kernel: audit: type=1403 audit(1772846065.892:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 7 01:14:26.914221 systemd[1]: Successfully loaded SELinux policy in 42.515ms.
Mar 7 01:14:26.914261 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.074ms.
Mar 7 01:14:26.914286 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:14:26.914308 systemd[1]: Detected virtualization amazon.
Mar 7 01:14:26.914330 systemd[1]: Detected architecture x86-64.
Mar 7 01:14:26.914355 systemd[1]: Detected first boot.
Mar 7 01:14:26.914384 systemd[1]: Initializing machine ID from VM UUID.
Mar 7 01:14:26.918389 zram_generator::config[1471]: No configuration found.
Mar 7 01:14:26.918434 systemd[1]: Populated /etc with preset unit settings.
Mar 7 01:14:26.918459 systemd[1]: Queued start job for default target multi-user.target.
Mar 7 01:14:26.918483 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Mar 7 01:14:26.918507 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 7 01:14:26.918530 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 7 01:14:26.918562 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 7 01:14:26.918757 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 7 01:14:26.918782 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 7 01:14:26.918811 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 7 01:14:26.918834 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 7 01:14:26.918857 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 7 01:14:26.918879 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:14:26.918902 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:14:26.918928 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 7 01:14:26.918950 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 7 01:14:26.918971 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 7 01:14:26.918994 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:14:26.919017 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 7 01:14:26.919034 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:14:26.919054 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 7 01:14:26.919073 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:14:26.919095 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 01:14:26.919121 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:14:26.919144 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:14:26.919164 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 7 01:14:26.919183 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 7 01:14:26.919202 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 01:14:26.919234 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 7 01:14:26.919252 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:14:26.919271 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:14:26.919290 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:14:26.919314 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 7 01:14:26.919332 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 7 01:14:26.919351 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 7 01:14:26.919371 systemd[1]: Mounting media.mount - External Media Directory...
Mar 7 01:14:26.919389 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:14:26.919407 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 7 01:14:26.919426 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 7 01:14:26.919444 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 7 01:14:26.919462 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 7 01:14:26.919484 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:14:26.919503 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:14:26.919521 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 7 01:14:26.919540 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:14:26.919558 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 01:14:26.919576 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:14:26.928265 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 7 01:14:26.928301 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:14:26.928333 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 7 01:14:26.928359 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Mar 7 01:14:26.928383 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Mar 7 01:14:26.928407 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:14:26.928430 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:14:26.928454 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 7 01:14:26.928520 systemd-journald[1575]: Collecting audit messages is disabled.
Mar 7 01:14:26.928565 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 7 01:14:26.928661 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 01:14:26.928687 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:14:26.928712 systemd-journald[1575]: Journal started
Mar 7 01:14:26.928758 systemd-journald[1575]: Runtime Journal (/run/log/journal/ec2ac4c4f2b780664d09a3420382c237) is 4.7M, max 38.2M, 33.4M free.
Mar 7 01:14:26.939614 kernel: loop: module loaded
Mar 7 01:14:26.941665 kernel: ACPI: bus type drm_connector registered
Mar 7 01:14:26.953696 kernel: fuse: init (API version 7.39)
Mar 7 01:14:26.953784 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:14:26.949342 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 7 01:14:26.950262 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 7 01:14:26.951100 systemd[1]: Mounted media.mount - External Media Directory.
Mar 7 01:14:26.952008 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 7 01:14:26.955804 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 7 01:14:26.957230 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 7 01:14:26.958331 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 7 01:14:26.960474 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:14:26.961471 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 7 01:14:26.962183 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 7 01:14:26.963158 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:14:26.963393 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:14:26.966315 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 01:14:26.966570 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 01:14:26.967780 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:14:26.968134 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:14:26.969417 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 7 01:14:26.969823 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 7 01:14:26.970969 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:14:26.971371 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:14:26.972683 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:14:26.973996 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 7 01:14:26.975447 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 7 01:14:26.992092 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 7 01:14:26.998744 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 7 01:14:27.005720 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 7 01:14:27.008035 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 7 01:14:27.016677 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 7 01:14:27.024812 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 7 01:14:27.029704 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 01:14:27.042666 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 7 01:14:27.046623 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 01:14:27.058925 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:14:27.063642 systemd-journald[1575]: Time spent on flushing to /var/log/journal/ec2ac4c4f2b780664d09a3420382c237 is 79.589ms for 973 entries.
Mar 7 01:14:27.063642 systemd-journald[1575]: System Journal (/var/log/journal/ec2ac4c4f2b780664d09a3420382c237) is 8.0M, max 195.6M, 187.6M free.
Mar 7 01:14:27.153310 systemd-journald[1575]: Received client request to flush runtime journal.
Mar 7 01:14:27.069814 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:14:27.087232 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 7 01:14:27.087994 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 7 01:14:27.113987 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 7 01:14:27.115939 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 7 01:14:27.121174 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:14:27.134423 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 7 01:14:27.160217 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 7 01:14:27.172195 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:14:27.182407 udevadm[1632]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 7 01:14:27.189287 systemd-tmpfiles[1623]: ACLs are not supported, ignoring.
Mar 7 01:14:27.189315 systemd-tmpfiles[1623]: ACLs are not supported, ignoring.
Mar 7 01:14:27.196755 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:14:27.210868 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 7 01:14:27.260051 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 7 01:14:27.268883 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:14:27.296560 systemd-tmpfiles[1645]: ACLs are not supported, ignoring.
Mar 7 01:14:27.296608 systemd-tmpfiles[1645]: ACLs are not supported, ignoring.
Mar 7 01:14:27.305951 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:14:27.767867 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 7 01:14:27.774812 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:14:27.809112 systemd-udevd[1651]: Using default interface naming scheme 'v255'.
Mar 7 01:14:27.848399 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:14:27.861786 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 01:14:27.887864 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 7 01:14:27.947534 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Mar 7 01:14:27.977814 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 7 01:14:28.005962 (udev-worker)[1658]: Network interface NamePolicy= disabled on kernel command line.
Mar 7 01:14:28.020629 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 7 01:14:28.037744 kernel: ACPI: button: Power Button [PWRF]
Mar 7 01:14:28.037817 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3
Mar 7 01:14:28.056606 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Mar 7 01:14:28.066627 kernel: ACPI: button: Sleep Button [SLPF]
Mar 7 01:14:28.119612 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4
Mar 7 01:14:28.121088 systemd-networkd[1654]: lo: Link UP
Mar 7 01:14:28.121572 systemd-networkd[1654]: lo: Gained carrier
Mar 7 01:14:28.124297 systemd-networkd[1654]: Enumeration completed
Mar 7 01:14:28.125145 systemd-networkd[1654]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:14:28.125289 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 01:14:28.126622 systemd-networkd[1654]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 01:14:28.130853 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 7 01:14:28.131040 systemd-networkd[1654]: eth0: Link UP
Mar 7 01:14:28.131233 systemd-networkd[1654]: eth0: Gained carrier
Mar 7 01:14:28.131261 systemd-networkd[1654]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:14:28.145614 systemd-networkd[1654]: eth0: DHCPv4 address 172.31.16.11/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 7 01:14:28.150973 kernel: mousedev: PS/2 mouse device common for all mice
Mar 7 01:14:28.156915 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:14:28.160347 systemd-networkd[1654]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:14:28.176040 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:14:28.176387 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:14:28.181889 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:14:28.205617 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1657)
Mar 7 01:14:28.326627 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 7 01:14:28.341947 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:14:28.343839 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 7 01:14:28.372798 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 7 01:14:28.389007 lvm[1777]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 7 01:14:28.416276 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 7 01:14:28.418213 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:14:28.424813 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 7 01:14:28.431455 lvm[1780]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 7 01:14:28.461014 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 7 01:14:28.462733 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 7 01:14:28.463493 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 7 01:14:28.463535 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 01:14:28.464163 systemd[1]: Reached target machines.target - Containers.
Mar 7 01:14:28.466178 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 7 01:14:28.474796 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 7 01:14:28.476981 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 7 01:14:28.477565 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:14:28.487946 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 7 01:14:28.503848 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 7 01:14:28.510043 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 7 01:14:28.513008 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 7 01:14:28.519869 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 7 01:14:28.542379 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 7 01:14:28.544865 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 7 01:14:28.549632 kernel: loop0: detected capacity change from 0 to 142488
Mar 7 01:14:28.609605 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 7 01:14:28.632606 kernel: loop1: detected capacity change from 0 to 140768
Mar 7 01:14:28.694626 kernel: loop2: detected capacity change from 0 to 61336
Mar 7 01:14:28.803612 kernel: loop3: detected capacity change from 0 to 228704
Mar 7 01:14:28.914746 kernel: loop4: detected capacity change from 0 to 142488
Mar 7 01:14:28.961833 kernel: loop5: detected capacity change from 0 to 140768
Mar 7 01:14:29.001627 kernel: loop6: detected capacity change from 0 to 61336
Mar 7 01:14:29.024613 kernel: loop7: detected capacity change from 0 to 228704
Mar 7 01:14:29.056852 (sd-merge)[1803]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Mar 7 01:14:29.057563 (sd-merge)[1803]: Merged extensions into '/usr'.
Mar 7 01:14:29.063978 systemd[1]: Reloading requested from client PID 1789 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 7 01:14:29.063998 systemd[1]: Reloading...
Mar 7 01:14:29.172615 zram_generator::config[1831]: No configuration found.
Mar 7 01:14:29.192933 ldconfig[1784]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 7 01:14:29.319122 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:14:29.400205 systemd[1]: Reloading finished in 335 ms.
Mar 7 01:14:29.419861 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 7 01:14:29.420975 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 7 01:14:29.434771 systemd[1]: Starting ensure-sysext.service...
Mar 7 01:14:29.438803 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:14:29.444675 systemd[1]: Reloading requested from client PID 1890 ('systemctl') (unit ensure-sysext.service)...
Mar 7 01:14:29.444695 systemd[1]: Reloading...
Mar 7 01:14:29.476248 systemd-tmpfiles[1891]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 7 01:14:29.477222 systemd-tmpfiles[1891]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 7 01:14:29.478683 systemd-tmpfiles[1891]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 7 01:14:29.479128 systemd-tmpfiles[1891]: ACLs are not supported, ignoring.
Mar 7 01:14:29.479431 systemd-tmpfiles[1891]: ACLs are not supported, ignoring.
Mar 7 01:14:29.485412 systemd-tmpfiles[1891]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 01:14:29.485430 systemd-tmpfiles[1891]: Skipping /boot
Mar 7 01:14:29.502564 systemd-tmpfiles[1891]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 01:14:29.502671 systemd-tmpfiles[1891]: Skipping /boot
Mar 7 01:14:29.556613 zram_generator::config[1919]: No configuration found.
Mar 7 01:14:29.690382 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:14:29.766545 systemd[1]: Reloading finished in 321 ms.
Mar 7 01:14:29.792426 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:14:29.805816 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 7 01:14:29.811793 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 7 01:14:29.819897 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 7 01:14:29.827383 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 01:14:29.834861 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 7 01:14:29.856881 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:14:29.857321 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:14:29.864657 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:14:29.875006 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:14:29.884407 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:14:29.888805 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:14:29.889012 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:14:29.897421 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:14:29.902910 augenrules[2005]: No rules
Mar 7 01:14:29.904271 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:14:29.914846 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 7 01:14:29.917345 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:14:29.918659 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:14:29.927210 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:14:29.927913 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:14:29.932207 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 7 01:14:29.957231 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 7 01:14:29.967626 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:14:29.968040 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:14:29.976094 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:14:29.983940 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 01:14:29.993922 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:14:30.007245 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:14:30.009660 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:14:30.010141 systemd[1]: Reached target time-set.target - System Time Set.
Mar 7 01:14:30.027600 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 7 01:14:30.028728 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:14:30.031890 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:14:30.032138 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:14:30.036699 systemd-resolved[1987]: Positive Trust Anchors:
Mar 7 01:14:30.036715 systemd-resolved[1987]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 01:14:30.036766 systemd-resolved[1987]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 01:14:30.038349 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 01:14:30.039394 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 01:14:30.040755 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:14:30.040991 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:14:30.042253 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:14:30.042494 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:14:30.052915 systemd-resolved[1987]: Defaulting to hostname 'linux'.
Mar 7 01:14:30.057455 systemd[1]: Finished ensure-sysext.service.
Mar 7 01:14:30.059433 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 01:14:30.065085 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 7 01:14:30.068076 systemd[1]: Reached target network.target - Network.
Mar 7 01:14:30.069580 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:14:30.070562 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 01:14:30.070784 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 01:14:30.079827 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 7 01:14:30.080694 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 7 01:14:30.080747 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 01:14:30.081409 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 7 01:14:30.082040 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 7 01:14:30.082987 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 7 01:14:30.083665 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 7 01:14:30.084183 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 7 01:14:30.084724 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 7 01:14:30.084771 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:14:30.085227 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:14:30.086269 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 7 01:14:30.088337 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 7 01:14:30.090089 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 7 01:14:30.093849 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 7 01:14:30.094553 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:14:30.095120 systemd[1]: Reached target basic.target - Basic System.
Mar 7 01:14:30.096001 systemd[1]: System is tainted: cgroupsv1
Mar 7 01:14:30.096061 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 7 01:14:30.096098 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 7 01:14:30.098750 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 7 01:14:30.103796 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 7 01:14:30.114714 systemd-networkd[1654]: eth0: Gained IPv6LL
Mar 7 01:14:30.115960 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 7 01:14:30.123320 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 7 01:14:30.134801 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 7 01:14:30.136222 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 7 01:14:30.139788 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 7 01:14:30.144721 systemd[1]: Started ntpd.service - Network Time Service.
Mar 7 01:14:30.153932 jq[2049]: false
Mar 7 01:14:30.170045 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 7 01:14:30.185728 systemd[1]: Starting setup-oem.service - Setup OEM...
Mar 7 01:14:30.194830 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 7 01:14:30.215862 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 7 01:14:30.225170 dbus-daemon[2046]: [system] SELinux support is enabled
Mar 7 01:14:30.227310 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 7 01:14:30.230141 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 7 01:14:30.234958 systemd[1]: Starting update-engine.service - Update Engine...
Mar 7 01:14:30.240229 dbus-daemon[2046]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1654 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Mar 7 01:14:30.249147 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 7 01:14:30.264879 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 7 01:14:30.272923 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 7 01:14:30.286402 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 7 01:14:30.288824 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 7 01:14:30.306093 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 7 01:14:30.306444 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 7 01:14:30.322866 ntpd[2052]: 7 Mar 01:14:30 ntpd[2052]: ntpd 4.2.8p17@1.4004-o Fri Mar 6 22:16:32 UTC 2026 (1): Starting
Mar 7 01:14:30.322866 ntpd[2052]: 7 Mar 01:14:30 ntpd[2052]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 7 01:14:30.322866 ntpd[2052]: 7 Mar 01:14:30 ntpd[2052]: ----------------------------------------------------
Mar 7 01:14:30.322866 ntpd[2052]: 7 Mar 01:14:30 ntpd[2052]: ntp-4 is maintained by Network Time Foundation,
Mar 7 01:14:30.322866 ntpd[2052]: 7 Mar 01:14:30 ntpd[2052]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 7 01:14:30.322866 ntpd[2052]: 7 Mar 01:14:30 ntpd[2052]: corporation. Support and training for ntp-4 are
Mar 7 01:14:30.322866 ntpd[2052]: 7 Mar 01:14:30 ntpd[2052]: available at https://www.nwtime.org/support
Mar 7 01:14:30.322866 ntpd[2052]: 7 Mar 01:14:30 ntpd[2052]: ----------------------------------------------------
Mar 7 01:14:30.321201 ntpd[2052]: ntpd 4.2.8p17@1.4004-o Fri Mar 6 22:16:32 UTC 2026 (1): Starting
Mar 7 01:14:30.336164 ntpd[2052]: 7 Mar 01:14:30 ntpd[2052]: proto: precision = 0.096 usec (-23)
Mar 7 01:14:30.336164 ntpd[2052]: 7 Mar 01:14:30 ntpd[2052]: basedate set to 2026-02-22
Mar 7 01:14:30.336164 ntpd[2052]: 7 Mar 01:14:30 ntpd[2052]: gps base set to 2026-02-22 (week 2407)
Mar 7 01:14:30.336296 jq[2065]: true
Mar 7 01:14:30.336499 extend-filesystems[2050]: Found loop4
Mar 7 01:14:30.336499 extend-filesystems[2050]: Found loop5
Mar 7 01:14:30.336499 extend-filesystems[2050]: Found loop6
Mar 7 01:14:30.336499 extend-filesystems[2050]: Found loop7
Mar 7 01:14:30.336499 extend-filesystems[2050]: Found nvme0n1
Mar 7 01:14:30.336499 extend-filesystems[2050]: Found nvme0n1p1
Mar 7 01:14:30.336499 extend-filesystems[2050]: Found nvme0n1p2
Mar 7 01:14:30.336499 extend-filesystems[2050]: Found nvme0n1p3
Mar 7 01:14:30.336499 extend-filesystems[2050]: Found usr
Mar 7 01:14:30.336499 extend-filesystems[2050]: Found nvme0n1p4
Mar 7 01:14:30.336499 extend-filesystems[2050]: Found nvme0n1p6
Mar 7 01:14:30.336499 extend-filesystems[2050]: Found nvme0n1p7
Mar 7 01:14:30.336499 extend-filesystems[2050]: Found nvme0n1p9
Mar 7 01:14:30.336499 extend-filesystems[2050]: Checking size of /dev/nvme0n1p9
Mar 7 01:14:30.321234 ntpd[2052]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 7 01:14:30.373833 (ntainerd)[2090]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 7 01:14:30.390961 update_engine[2064]: I20260307 01:14:30.357540 2064 main.cc:92] Flatcar Update Engine starting
Mar 7 01:14:30.390961 update_engine[2064]: I20260307 01:14:30.371997 2064 update_check_scheduler.cc:74] Next update check in 11m25s
Mar 7 01:14:30.391394 ntpd[2052]: 7 Mar 01:14:30 ntpd[2052]: Listen and drop on 0 v6wildcard [::]:123
Mar 7 01:14:30.391394 ntpd[2052]: 7 Mar 01:14:30 ntpd[2052]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 7 01:14:30.391394 ntpd[2052]: 7 Mar 01:14:30 ntpd[2052]: Listen normally on 2 lo 127.0.0.1:123
Mar 7 01:14:30.391394 ntpd[2052]: 7 Mar 01:14:30 ntpd[2052]: Listen normally on 3 eth0 172.31.16.11:123
Mar 7 01:14:30.391394 ntpd[2052]: 7 Mar 01:14:30 ntpd[2052]: Listen normally on 4 lo [::1]:123
Mar 7 01:14:30.391394 ntpd[2052]: 7 Mar 01:14:30 ntpd[2052]: Listen normally on 5 eth0 [fe80::42b:13ff:fe14:53bb%2]:123
Mar 7 01:14:30.391394 ntpd[2052]: 7 Mar 01:14:30 ntpd[2052]: Listening on routing socket on fd #22 for interface updates
Mar 7 01:14:30.391394 ntpd[2052]: 7 Mar 01:14:30 ntpd[2052]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 7 01:14:30.391394 ntpd[2052]: 7 Mar 01:14:30 ntpd[2052]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 7 01:14:30.321246 ntpd[2052]: ----------------------------------------------------
Mar 7 01:14:30.380565 systemd[1]: Started update-engine.service - Update Engine.
Mar 7 01:14:30.321256 ntpd[2052]: ntp-4 is maintained by Network Time Foundation,
Mar 7 01:14:30.388946 systemd[1]: Reached target network-online.target - Network is Online.
Mar 7 01:14:30.321266 ntpd[2052]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 7 01:14:30.321276 ntpd[2052]: corporation. Support and training for ntp-4 are
Mar 7 01:14:30.321285 ntpd[2052]: available at https://www.nwtime.org/support
Mar 7 01:14:30.321296 ntpd[2052]: ----------------------------------------------------
Mar 7 01:14:30.328149 ntpd[2052]: proto: precision = 0.096 usec (-23)
Mar 7 01:14:30.398677 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:14:30.329924 ntpd[2052]: basedate set to 2026-02-22
Mar 7 01:14:30.329949 ntpd[2052]: gps base set to 2026-02-22 (week 2407)
Mar 7 01:14:30.338276 ntpd[2052]: Listen and drop on 0 v6wildcard [::]:123
Mar 7 01:14:30.338335 ntpd[2052]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 7 01:14:30.338571 ntpd[2052]: Listen normally on 2 lo 127.0.0.1:123
Mar 7 01:14:30.339519 ntpd[2052]: Listen normally on 3 eth0 172.31.16.11:123
Mar 7 01:14:30.339576 ntpd[2052]: Listen normally on 4 lo [::1]:123
Mar 7 01:14:30.339669 ntpd[2052]: Listen normally on 5 eth0 [fe80::42b:13ff:fe14:53bb%2]:123
Mar 7 01:14:30.339713 ntpd[2052]: Listening on routing socket on fd #22 for interface updates
Mar 7 01:14:30.357705 ntpd[2052]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 7 01:14:30.357736 ntpd[2052]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 7 01:14:30.377170 dbus-daemon[2046]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 7 01:14:30.408731 jq[2086]: true
Mar 7 01:14:30.417782 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 7 01:14:30.423009 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 7 01:14:30.423060 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 7 01:14:30.445797 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Mar 7 01:14:30.447848 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 7 01:14:30.447896 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 7 01:14:30.454696 extend-filesystems[2050]: Resized partition /dev/nvme0n1p9
Mar 7 01:14:30.463071 extend-filesystems[2109]: resize2fs 1.47.1 (20-May-2024)
Mar 7 01:14:30.473169 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Mar 7 01:14:30.472721 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 7 01:14:30.474810 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 7 01:14:30.476454 coreos-metadata[2045]: Mar 07 01:14:30.476 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Mar 7 01:14:30.481506 systemd-logind[2063]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 7 01:14:30.483026 systemd-logind[2063]: Watching system buttons on /dev/input/event2 (Sleep Button)
Mar 7 01:14:30.489226 coreos-metadata[2045]: Mar 07 01:14:30.484 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Mar 7 01:14:30.483056 systemd-logind[2063]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 7 01:14:30.483735 systemd-logind[2063]: New seat seat0.
Mar 7 01:14:30.490259 systemd[1]: motdgen.service: Deactivated successfully.
Mar 7 01:14:30.491987 coreos-metadata[2045]: Mar 07 01:14:30.491 INFO Fetch successful
Mar 7 01:14:30.491987 coreos-metadata[2045]: Mar 07 01:14:30.491 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Mar 7 01:14:30.492409 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 7 01:14:30.493284 coreos-metadata[2045]: Mar 07 01:14:30.492 INFO Fetch successful
Mar 7 01:14:30.493284 coreos-metadata[2045]: Mar 07 01:14:30.492 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Mar 7 01:14:30.493993 coreos-metadata[2045]: Mar 07 01:14:30.493 INFO Fetch successful
Mar 7 01:14:30.493993 coreos-metadata[2045]: Mar 07 01:14:30.493 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Mar 7 01:14:30.497998 coreos-metadata[2045]: Mar 07 01:14:30.495 INFO Fetch successful
Mar 7 01:14:30.497998 coreos-metadata[2045]: Mar 07 01:14:30.495 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Mar 7 01:14:30.499604 coreos-metadata[2045]: Mar 07 01:14:30.498 INFO Fetch failed with 404: resource not found
Mar 7 01:14:30.499604 coreos-metadata[2045]: Mar 07 01:14:30.498 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Mar 7 01:14:30.500300 coreos-metadata[2045]: Mar 07 01:14:30.500 INFO Fetch successful
Mar 7 01:14:30.500300 coreos-metadata[2045]: Mar 07 01:14:30.500 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Mar 7 01:14:30.500757 coreos-metadata[2045]: Mar 07 01:14:30.500 INFO Fetch successful
Mar 7 01:14:30.500757 coreos-metadata[2045]: Mar 07 01:14:30.500 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Mar 7 01:14:30.501472 coreos-metadata[2045]: Mar 07 01:14:30.501 INFO Fetch successful
Mar 7 01:14:30.501472 coreos-metadata[2045]: Mar 07 01:14:30.501 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Mar 7 01:14:30.502474 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 7 01:14:30.503907 coreos-metadata[2045]: Mar 07 01:14:30.503 INFO Fetch successful
Mar 7 01:14:30.503907 coreos-metadata[2045]: Mar 07 01:14:30.503 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Mar 7 01:14:30.504828 coreos-metadata[2045]: Mar 07 01:14:30.504 INFO Fetch successful
Mar 7 01:14:30.546450 systemd[1]: Finished setup-oem.service - Setup OEM.
Mar 7 01:14:30.567662 tar[2076]: linux-amd64/LICENSE
Mar 7 01:14:30.569987 tar[2076]: linux-amd64/helm
Mar 7 01:14:30.591321 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Mar 7 01:14:30.658269 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 7 01:14:30.663693 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 7 01:14:30.664777 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 7 01:14:30.820559 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067
Mar 7 01:14:30.841967 amazon-ssm-agent[2127]: Initializing new seelog logger
Mar 7 01:14:30.842370 amazon-ssm-agent[2127]: New Seelog Logger Creation Complete
Mar 7 01:14:30.842370 amazon-ssm-agent[2127]: 2026/03/07 01:14:30 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 7 01:14:30.842370 amazon-ssm-agent[2127]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 7 01:14:30.847618 extend-filesystems[2109]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Mar 7 01:14:30.847618 extend-filesystems[2109]: old_desc_blocks = 1, new_desc_blocks = 2
Mar 7 01:14:30.847618 extend-filesystems[2109]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
Mar 7 01:14:30.875293 amazon-ssm-agent[2127]: 2026/03/07 01:14:30 processing appconfig overrides
Mar 7 01:14:30.875293 amazon-ssm-agent[2127]: 2026/03/07 01:14:30 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 7 01:14:30.875293 amazon-ssm-agent[2127]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 7 01:14:30.875293 amazon-ssm-agent[2127]: 2026/03/07 01:14:30 processing appconfig overrides
Mar 7 01:14:30.875293 amazon-ssm-agent[2127]: 2026/03/07 01:14:30 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 7 01:14:30.875293 amazon-ssm-agent[2127]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 7 01:14:30.875293 amazon-ssm-agent[2127]: 2026/03/07 01:14:30 processing appconfig overrides
Mar 7 01:14:30.875293 amazon-ssm-agent[2127]: 2026-03-07 01:14:30 INFO Proxy environment variables:
Mar 7 01:14:30.875293 amazon-ssm-agent[2127]: 2026/03/07 01:14:30 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 7 01:14:30.875293 amazon-ssm-agent[2127]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 7 01:14:30.875293 amazon-ssm-agent[2127]: 2026/03/07 01:14:30 processing appconfig overrides
Mar 7 01:14:30.876047 bash[2159]: Updated "/home/core/.ssh/authorized_keys"
Mar 7 01:14:30.848929 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 7 01:14:30.876248 extend-filesystems[2050]: Resized filesystem in /dev/nvme0n1p9
Mar 7 01:14:30.849275 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 7 01:14:30.870438 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 7 01:14:30.886988 systemd[1]: Starting sshkeys.service...
Mar 7 01:14:30.902576 sshd_keygen[2099]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 7 01:14:30.921243 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1657)
Mar 7 01:14:30.967608 amazon-ssm-agent[2127]: 2026-03-07 01:14:30 INFO http_proxy:
Mar 7 01:14:30.970282 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 7 01:14:30.980004 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 7 01:14:31.073937 amazon-ssm-agent[2127]: 2026-03-07 01:14:30 INFO no_proxy:
Mar 7 01:14:31.090147 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 7 01:14:31.108079 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 7 01:14:31.113039 dbus-daemon[2046]: [system] Successfully activated service 'org.freedesktop.hostname1'
Mar 7 01:14:31.113476 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Mar 7 01:14:31.114101 dbus-daemon[2046]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=2105 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Mar 7 01:14:31.127101 systemd[1]: Starting polkit.service - Authorization Manager...
Mar 7 01:14:31.140752 coreos-metadata[2184]: Mar 07 01:14:31.140 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Mar 7 01:14:31.142540 systemd[1]: issuegen.service: Deactivated successfully.
Mar 7 01:14:31.142943 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 7 01:14:31.147902 coreos-metadata[2184]: Mar 07 01:14:31.147 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Mar 7 01:14:31.155045 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 7 01:14:31.164572 coreos-metadata[2184]: Mar 07 01:14:31.164 INFO Fetch successful
Mar 7 01:14:31.164572 coreos-metadata[2184]: Mar 07 01:14:31.164 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Mar 7 01:14:31.168438 coreos-metadata[2184]: Mar 07 01:14:31.166 INFO Fetch successful
Mar 7 01:14:31.166930 locksmithd[2110]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 7 01:14:31.171693 amazon-ssm-agent[2127]: 2026-03-07 01:14:30 INFO https_proxy:
Mar 7 01:14:31.178297 unknown[2184]: wrote ssh authorized keys file for user: core
Mar 7 01:14:31.197415 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 7 01:14:31.203543 polkitd[2231]: Started polkitd version 121
Mar 7 01:14:31.250055 polkitd[2231]: Loading rules from directory /etc/polkit-1/rules.d
Mar 7 01:14:31.252207 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 7 01:14:31.254172 polkitd[2231]: Loading rules from directory /usr/share/polkit-1/rules.d
Mar 7 01:14:31.259816 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 7 01:14:31.263964 polkitd[2231]: Finished loading, compiling and executing 2 rules
Mar 7 01:14:31.266819 systemd[1]: Reached target getty.target - Login Prompts.
Mar 7 01:14:31.268787 update-ssh-keys[2249]: Updated "/home/core/.ssh/authorized_keys"
Mar 7 01:14:31.281075 dbus-daemon[2046]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Mar 7 01:14:31.293970 amazon-ssm-agent[2127]: 2026-03-07 01:14:30 INFO Checking if agent identity type OnPrem can be assumed
Mar 7 01:14:31.288025 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 7 01:14:31.294770 systemd[1]: Started polkit.service - Authorization Manager.
Mar 7 01:14:31.295276 polkitd[2231]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Mar 7 01:14:31.303402 systemd[1]: Finished sshkeys.service.
Mar 7 01:14:31.390844 amazon-ssm-agent[2127]: 2026-03-07 01:14:30 INFO Checking if agent identity type EC2 can be assumed
Mar 7 01:14:31.408843 systemd-resolved[1987]: System hostname changed to 'ip-172-31-16-11'.
Mar 7 01:14:31.408847 systemd-hostnamed[2105]: Hostname set to (transient)
Mar 7 01:14:31.492042 amazon-ssm-agent[2127]: 2026-03-07 01:14:31 INFO Agent will take identity from EC2
Mar 7 01:14:31.530606 containerd[2090]: time="2026-03-07T01:14:31.529550056Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 7 01:14:31.591619 amazon-ssm-agent[2127]: 2026-03-07 01:14:31 INFO [amazon-ssm-agent] using named pipe channel for IPC
Mar 7 01:14:31.665113 containerd[2090]: time="2026-03-07T01:14:31.664986962Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:14:31.671540 containerd[2090]: time="2026-03-07T01:14:31.670347630Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:14:31.671540 containerd[2090]: time="2026-03-07T01:14:31.670405385Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 7 01:14:31.671540 containerd[2090]: time="2026-03-07T01:14:31.670430895Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 7 01:14:31.671540 containerd[2090]: time="2026-03-07T01:14:31.670652571Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 7 01:14:31.671540 containerd[2090]: time="2026-03-07T01:14:31.670677149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 7 01:14:31.671540 containerd[2090]: time="2026-03-07T01:14:31.670748863Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:14:31.671540 containerd[2090]: time="2026-03-07T01:14:31.670767252Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:14:31.671540 containerd[2090]: time="2026-03-07T01:14:31.671068479Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:14:31.671540 containerd[2090]: time="2026-03-07T01:14:31.671090090Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 7 01:14:31.671540 containerd[2090]: time="2026-03-07T01:14:31.671112291Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:14:31.671540 containerd[2090]: time="2026-03-07T01:14:31.671127736Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 7 01:14:31.672071 containerd[2090]: time="2026-03-07T01:14:31.671235242Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:14:31.672071 containerd[2090]: time="2026-03-07T01:14:31.671478544Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:14:31.673650 containerd[2090]: time="2026-03-07T01:14:31.672920718Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:14:31.673650 containerd[2090]: time="2026-03-07T01:14:31.672955755Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 7 01:14:31.673650 containerd[2090]: time="2026-03-07T01:14:31.673078650Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 7 01:14:31.673650 containerd[2090]: time="2026-03-07T01:14:31.673133798Z" level=info msg="metadata content store policy set" policy=shared
Mar 7 01:14:31.678390 containerd[2090]: time="2026-03-07T01:14:31.678331788Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 7 01:14:31.679355 containerd[2090]: time="2026-03-07T01:14:31.678615621Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 7 01:14:31.679355 containerd[2090]: time="2026-03-07T01:14:31.678648048Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 7 01:14:31.679355 containerd[2090]: time="2026-03-07T01:14:31.679130525Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 7 01:14:31.679355 containerd[2090]: time="2026-03-07T01:14:31.679163452Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 7 01:14:31.679681 containerd[2090]: time="2026-03-07T01:14:31.679632701Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 7 01:14:31.681680 containerd[2090]: time="2026-03-07T01:14:31.681510826Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 7 01:14:31.681933 containerd[2090]: time="2026-03-07T01:14:31.681912853Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 7 01:14:31.682656 containerd[2090]: time="2026-03-07T01:14:31.682356859Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 7 01:14:31.682656 containerd[2090]: time="2026-03-07T01:14:31.682387200Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 7 01:14:31.682656 containerd[2090]: time="2026-03-07T01:14:31.682424993Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 7 01:14:31.682656 containerd[2090]: time="2026-03-07T01:14:31.682445660Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 7 01:14:31.682656 containerd[2090]: time="2026-03-07T01:14:31.682465339Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 7 01:14:31.682656 containerd[2090]: time="2026-03-07T01:14:31.682500191Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 7 01:14:31.682656 containerd[2090]: time="2026-03-07T01:14:31.682522068Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 7 01:14:31.682656 containerd[2090]: time="2026-03-07T01:14:31.682542068Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 7 01:14:31.682656 containerd[2090]: time="2026-03-07T01:14:31.682597495Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 7 01:14:31.682656 containerd[2090]: time="2026-03-07T01:14:31.682617105Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 7 01:14:31.683606 containerd[2090]: time="2026-03-07T01:14:31.682863462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 7 01:14:31.683606 containerd[2090]: time="2026-03-07T01:14:31.682908909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 7 01:14:31.683606 containerd[2090]: time="2026-03-07T01:14:31.682994594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 7 01:14:31.683606 containerd[2090]: time="2026-03-07T01:14:31.683018498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 7 01:14:31.683606 containerd[2090]: time="2026-03-07T01:14:31.683037412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 7 01:14:31.683606 containerd[2090]: time="2026-03-07T01:14:31.683317639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 7 01:14:31.683606 containerd[2090]: time="2026-03-07T01:14:31.683338915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 7 01:14:31.683606 containerd[2090]: time="2026-03-07T01:14:31.683360103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 7 01:14:31.684819 containerd[2090]: time="2026-03-07T01:14:31.683802401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 7 01:14:31.684819 containerd[2090]: time="2026-03-07T01:14:31.683832659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 7 01:14:31.684819 containerd[2090]: time="2026-03-07T01:14:31.683868687Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 7 01:14:31.684819 containerd[2090]: time="2026-03-07T01:14:31.683888103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 7 01:14:31.684819 containerd[2090]: time="2026-03-07T01:14:31.683907671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 7 01:14:31.684819 containerd[2090]: time="2026-03-07T01:14:31.683949515Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 7 01:14:31.684819 containerd[2090]: time="2026-03-07T01:14:31.683985372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 7 01:14:31.684819 containerd[2090]: time="2026-03-07T01:14:31.684443607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 7 01:14:31.684819 containerd[2090]: time="2026-03-07T01:14:31.684466784Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 7 01:14:31.684819 containerd[2090]: time="2026-03-07T01:14:31.684548692Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 7 01:14:31.686979 containerd[2090]: time="2026-03-07T01:14:31.684575254Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 7 01:14:31.686979 containerd[2090]: time="2026-03-07T01:14:31.685296383Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 7 01:14:31.686979 containerd[2090]: time="2026-03-07T01:14:31.685319530Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 7 01:14:31.686979 containerd[2090]: time="2026-03-07T01:14:31.685350619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 7 01:14:31.686979 containerd[2090]: time="2026-03-07T01:14:31.685371049Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 7 01:14:31.686979 containerd[2090]: time="2026-03-07T01:14:31.685391385Z" level=info msg="NRI interface is disabled by configuration."
Mar 7 01:14:31.686979 containerd[2090]: time="2026-03-07T01:14:31.685407659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 7 01:14:31.687529 containerd[2090]: time="2026-03-07T01:14:31.686949871Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 7 01:14:31.687529 containerd[2090]: time="2026-03-07T01:14:31.687388273Z" level=info msg="Connect containerd service"
Mar 7 01:14:31.687529 containerd[2090]: time="2026-03-07T01:14:31.687454824Z" level=info msg="using legacy CRI server"
Mar 7 01:14:31.687529 containerd[2090]: time="2026-03-07T01:14:31.687465789Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 7 01:14:31.688460 containerd[2090]: time="2026-03-07T01:14:31.687987843Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 7 01:14:31.690206 containerd[2090]: time="2026-03-07T01:14:31.690181205Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 7 01:14:31.691320 containerd[2090]: time="2026-03-07T01:14:31.690793152Z" level=info msg="Start subscribing containerd event"
Mar 7 01:14:31.691320 containerd[2090]: time="2026-03-07T01:14:31.690862910Z" level=info msg="Start recovering state"
Mar 7 01:14:31.691320 containerd[2090]: time="2026-03-07T01:14:31.690948531Z" level=info msg="Start event monitor"
Mar 7 01:14:31.691320 containerd[2090]: time="2026-03-07T01:14:31.690972030Z" level=info msg="Start snapshots syncer"
Mar 7 01:14:31.691320 containerd[2090]: time="2026-03-07T01:14:31.690984631Z" level=info msg="Start cni network conf syncer for default"
Mar 7 01:14:31.691320 containerd[2090]: time="2026-03-07T01:14:31.690996194Z" level=info msg="Start streaming server"
Mar 7 01:14:31.691970 amazon-ssm-agent[2127]: 2026-03-07 01:14:31 INFO [amazon-ssm-agent] using named pipe channel for IPC
Mar 7 01:14:31.692573 containerd[2090]: time="2026-03-07T01:14:31.692244818Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 7 01:14:31.692573 containerd[2090]: time="2026-03-07T01:14:31.692299924Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 7 01:14:31.693123 systemd[1]: Started containerd.service - containerd container runtime.
Mar 7 01:14:31.695472 containerd[2090]: time="2026-03-07T01:14:31.693152097Z" level=info msg="containerd successfully booted in 0.167087s"
Mar 7 01:14:31.791164 amazon-ssm-agent[2127]: 2026-03-07 01:14:31 INFO [amazon-ssm-agent] using named pipe channel for IPC
Mar 7 01:14:31.890452 amazon-ssm-agent[2127]: 2026-03-07 01:14:31 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Mar 7 01:14:31.991577 amazon-ssm-agent[2127]: 2026-03-07 01:14:31 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Mar 7 01:14:32.092776 amazon-ssm-agent[2127]: 2026-03-07 01:14:31 INFO [amazon-ssm-agent] Starting Core Agent
Mar 7 01:14:32.112215 amazon-ssm-agent[2127]: 2026-03-07 01:14:31 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Mar 7 01:14:32.112215 amazon-ssm-agent[2127]: 2026-03-07 01:14:31 INFO [Registrar] Starting registrar module
Mar 7 01:14:32.112215 amazon-ssm-agent[2127]: 2026-03-07 01:14:31 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Mar 7 01:14:32.112215 amazon-ssm-agent[2127]: 2026-03-07 01:14:32 INFO [EC2Identity] EC2 registration was successful.
Mar 7 01:14:32.112215 amazon-ssm-agent[2127]: 2026-03-07 01:14:32 INFO [CredentialRefresher] credentialRefresher has started
Mar 7 01:14:32.112215 amazon-ssm-agent[2127]: 2026-03-07 01:14:32 INFO [CredentialRefresher] Starting credentials refresher loop
Mar 7 01:14:32.112215 amazon-ssm-agent[2127]: 2026-03-07 01:14:32 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Mar 7 01:14:32.134757 tar[2076]: linux-amd64/README.md
Mar 7 01:14:32.148848 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 7 01:14:32.192667 amazon-ssm-agent[2127]: 2026-03-07 01:14:32 INFO [CredentialRefresher] Next credential rotation will be in 30.50832524245 minutes
Mar 7 01:14:33.125835 amazon-ssm-agent[2127]: 2026-03-07 01:14:33 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Mar 7 01:14:33.227607 amazon-ssm-agent[2127]: 2026-03-07 01:14:33 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2320) started
Mar 7 01:14:33.327721 amazon-ssm-agent[2127]: 2026-03-07 01:14:33 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Mar 7 01:14:33.328916 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:14:33.330251 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 7 01:14:33.335774 systemd[1]: Startup finished in 6.804s (kernel) + 7.482s (userspace) = 14.287s.
Mar 7 01:14:33.337194 (kubelet)[2337]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:14:34.254921 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 7 01:14:34.263528 systemd[1]: Started sshd@0-172.31.16.11:22-68.220.241.50:58302.service - OpenSSH per-connection server daemon (68.220.241.50:58302).
Mar 7 01:14:34.380979 kubelet[2337]: E0307 01:14:34.380886 2337 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:14:34.383705 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:14:34.383975 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:14:34.757781 sshd[2347]: Accepted publickey for core from 68.220.241.50 port 58302 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:14:34.759955 sshd[2347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:14:34.772001 systemd-logind[2063]: New session 1 of user core.
Mar 7 01:14:34.773189 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 7 01:14:34.778284 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 7 01:14:34.796012 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 7 01:14:34.805050 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 7 01:14:34.810402 (systemd)[2357]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 7 01:14:34.927100 systemd[2357]: Queued start job for default target default.target.
Mar 7 01:14:34.927650 systemd[2357]: Created slice app.slice - User Application Slice.
Mar 7 01:14:34.927684 systemd[2357]: Reached target paths.target - Paths.
Mar 7 01:14:34.927703 systemd[2357]: Reached target timers.target - Timers.
Mar 7 01:14:34.932741 systemd[2357]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 7 01:14:34.941479 systemd[2357]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 7 01:14:34.941565 systemd[2357]: Reached target sockets.target - Sockets.
Mar 7 01:14:34.941609 systemd[2357]: Reached target basic.target - Basic System.
Mar 7 01:14:34.941666 systemd[2357]: Reached target default.target - Main User Target.
Mar 7 01:14:34.941704 systemd[2357]: Startup finished in 124ms.
Mar 7 01:14:34.943631 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 7 01:14:34.949989 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 7 01:14:35.315042 systemd[1]: Started sshd@1-172.31.16.11:22-68.220.241.50:58312.service - OpenSSH per-connection server daemon (68.220.241.50:58312).
Mar 7 01:14:35.797201 sshd[2369]: Accepted publickey for core from 68.220.241.50 port 58312 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:14:35.798902 sshd[2369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:14:35.803402 systemd-logind[2063]: New session 2 of user core.
Mar 7 01:14:35.811054 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 7 01:14:36.147785 sshd[2369]: pam_unix(sshd:session): session closed for user core
Mar 7 01:14:36.153251 systemd[1]: sshd@1-172.31.16.11:22-68.220.241.50:58312.service: Deactivated successfully.
Mar 7 01:14:36.153666 systemd-logind[2063]: Session 2 logged out. Waiting for processes to exit.
Mar 7 01:14:36.157809 systemd[1]: session-2.scope: Deactivated successfully.
Mar 7 01:14:36.158793 systemd-logind[2063]: Removed session 2.
Mar 7 01:14:36.232010 systemd[1]: Started sshd@2-172.31.16.11:22-68.220.241.50:58316.service - OpenSSH per-connection server daemon (68.220.241.50:58316).
Mar 7 01:14:36.721851 sshd[2377]: Accepted publickey for core from 68.220.241.50 port 58316 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:14:36.723470 sshd[2377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:14:36.728625 systemd-logind[2063]: New session 3 of user core.
Mar 7 01:14:36.734910 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 7 01:14:37.073945 sshd[2377]: pam_unix(sshd:session): session closed for user core
Mar 7 01:14:37.077689 systemd[1]: sshd@2-172.31.16.11:22-68.220.241.50:58316.service: Deactivated successfully.
Mar 7 01:14:37.082732 systemd-logind[2063]: Session 3 logged out. Waiting for processes to exit.
Mar 7 01:14:37.083496 systemd[1]: session-3.scope: Deactivated successfully.
Mar 7 01:14:37.084901 systemd-logind[2063]: Removed session 3.
Mar 7 01:14:37.165026 systemd[1]: Started sshd@3-172.31.16.11:22-68.220.241.50:58326.service - OpenSSH per-connection server daemon (68.220.241.50:58326).
Mar 7 01:14:38.400837 systemd-resolved[1987]: Clock change detected. Flushing caches.
Mar 7 01:14:38.733727 sshd[2385]: Accepted publickey for core from 68.220.241.50 port 58326 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:14:38.734961 sshd[2385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:14:38.739558 systemd-logind[2063]: New session 4 of user core.
Mar 7 01:14:38.743033 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 7 01:14:39.091605 sshd[2385]: pam_unix(sshd:session): session closed for user core
Mar 7 01:14:39.095074 systemd[1]: sshd@3-172.31.16.11:22-68.220.241.50:58326.service: Deactivated successfully.
Mar 7 01:14:39.101024 systemd-logind[2063]: Session 4 logged out. Waiting for processes to exit.
Mar 7 01:14:39.101175 systemd[1]: session-4.scope: Deactivated successfully.
Mar 7 01:14:39.102580 systemd-logind[2063]: Removed session 4.
Mar 7 01:14:39.182099 systemd[1]: Started sshd@4-172.31.16.11:22-68.220.241.50:58330.service - OpenSSH per-connection server daemon (68.220.241.50:58330).
Mar 7 01:14:39.672813 sshd[2393]: Accepted publickey for core from 68.220.241.50 port 58330 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:14:39.674400 sshd[2393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:14:39.679630 systemd-logind[2063]: New session 5 of user core.
Mar 7 01:14:39.691156 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 7 01:14:39.965430 sudo[2397]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 7 01:14:39.965890 sudo[2397]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 7 01:14:39.979458 sudo[2397]: pam_unix(sudo:session): session closed for user root
Mar 7 01:14:40.058422 sshd[2393]: pam_unix(sshd:session): session closed for user core
Mar 7 01:14:40.062224 systemd[1]: sshd@4-172.31.16.11:22-68.220.241.50:58330.service: Deactivated successfully.
Mar 7 01:14:40.067494 systemd-logind[2063]: Session 5 logged out. Waiting for processes to exit.
Mar 7 01:14:40.068373 systemd[1]: session-5.scope: Deactivated successfully.
Mar 7 01:14:40.069978 systemd-logind[2063]: Removed session 5.
Mar 7 01:14:40.141061 systemd[1]: Started sshd@5-172.31.16.11:22-68.220.241.50:58332.service - OpenSSH per-connection server daemon (68.220.241.50:58332).
Mar 7 01:14:40.626499 sshd[2402]: Accepted publickey for core from 68.220.241.50 port 58332 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:14:40.627202 sshd[2402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:14:40.632402 systemd-logind[2063]: New session 6 of user core.
Mar 7 01:14:40.642182 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 7 01:14:40.901870 sudo[2407]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 7 01:14:40.902269 sudo[2407]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 7 01:14:40.906568 sudo[2407]: pam_unix(sudo:session): session closed for user root
Mar 7 01:14:40.912107 sudo[2406]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Mar 7 01:14:40.912491 sudo[2406]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 7 01:14:40.939149 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Mar 7 01:14:40.958086 auditctl[2410]: No rules
Mar 7 01:14:40.958839 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 7 01:14:40.959199 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Mar 7 01:14:40.968102 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 7 01:14:40.996025 augenrules[2429]: No rules
Mar 7 01:14:40.997944 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 7 01:14:41.001290 sudo[2406]: pam_unix(sudo:session): session closed for user root
Mar 7 01:14:41.079333 sshd[2402]: pam_unix(sshd:session): session closed for user core
Mar 7 01:14:41.085036 systemd[1]: sshd@5-172.31.16.11:22-68.220.241.50:58332.service: Deactivated successfully.
Mar 7 01:14:41.088088 systemd[1]: session-6.scope: Deactivated successfully.
Mar 7 01:14:41.089502 systemd-logind[2063]: Session 6 logged out. Waiting for processes to exit.
Mar 7 01:14:41.091369 systemd-logind[2063]: Removed session 6.
Mar 7 01:14:41.165063 systemd[1]: Started sshd@6-172.31.16.11:22-68.220.241.50:58334.service - OpenSSH per-connection server daemon (68.220.241.50:58334).
Mar 7 01:14:41.647870 sshd[2438]: Accepted publickey for core from 68.220.241.50 port 58334 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:14:41.648818 sshd[2438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:14:41.653483 systemd-logind[2063]: New session 7 of user core.
Mar 7 01:14:41.662033 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 7 01:14:41.922787 sudo[2442]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 7 01:14:41.923191 sudo[2442]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 7 01:14:42.299054 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 7 01:14:42.300647 (dockerd)[2458]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 7 01:14:42.660796 dockerd[2458]: time="2026-03-07T01:14:42.660402533Z" level=info msg="Starting up"
Mar 7 01:14:43.437287 dockerd[2458]: time="2026-03-07T01:14:43.437236284Z" level=info msg="Loading containers: start."
Mar 7 01:14:43.562005 kernel: Initializing XFRM netlink socket
Mar 7 01:14:43.591249 (udev-worker)[2480]: Network interface NamePolicy= disabled on kernel command line.
Mar 7 01:14:43.658583 systemd-networkd[1654]: docker0: Link UP
Mar 7 01:14:43.680235 dockerd[2458]: time="2026-03-07T01:14:43.680188992Z" level=info msg="Loading containers: done."
Mar 7 01:14:43.723236 dockerd[2458]: time="2026-03-07T01:14:43.723041351Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 7 01:14:43.723236 dockerd[2458]: time="2026-03-07T01:14:43.723214688Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 7 01:14:43.723510 dockerd[2458]: time="2026-03-07T01:14:43.723362139Z" level=info msg="Daemon has completed initialization"
Mar 7 01:14:43.773273 dockerd[2458]: time="2026-03-07T01:14:43.773001397Z" level=info msg="API listen on /run/docker.sock"
Mar 7 01:14:43.773159 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 7 01:14:44.694850 containerd[2090]: time="2026-03-07T01:14:44.694811801Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\""
Mar 7 01:14:45.300078 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2263087678.mount: Deactivated successfully.
Mar 7 01:14:45.712471 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 7 01:14:45.720301 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:14:46.203902 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:14:46.216465 (kubelet)[2656]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:14:46.271484 kubelet[2656]: E0307 01:14:46.271380 2656 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:14:46.275434 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:14:46.275678 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:14:47.079033 containerd[2090]: time="2026-03-07T01:14:47.078949264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:14:47.081054 containerd[2090]: time="2026-03-07T01:14:47.080849553Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116186"
Mar 7 01:14:47.083422 containerd[2090]: time="2026-03-07T01:14:47.083143146Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:14:47.087240 containerd[2090]: time="2026-03-07T01:14:47.087183293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:14:47.088665 containerd[2090]: time="2026-03-07T01:14:47.088420508Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 2.393562369s"
Mar 7 01:14:47.088665 containerd[2090]: time="2026-03-07T01:14:47.088467957Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\""
Mar 7 01:14:47.089439 containerd[2090]: time="2026-03-07T01:14:47.089222841Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\""
Mar 7 01:14:48.625254 containerd[2090]: time="2026-03-07T01:14:48.625197891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:14:48.627504 containerd[2090]: time="2026-03-07T01:14:48.627290299Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021810"
Mar 7 01:14:48.629833 containerd[2090]: time="2026-03-07T01:14:48.629788164Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:14:48.634273 containerd[2090]: time="2026-03-07T01:14:48.634195483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:14:48.635550 containerd[2090]: time="2026-03-07T01:14:48.635381220Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 1.545835408s"
Mar 7 01:14:48.635550 containerd[2090]: time="2026-03-07T01:14:48.635423898Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\""
Mar 7 01:14:48.636264 containerd[2090]: time="2026-03-07T01:14:48.636062907Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\""
Mar 7 01:14:49.949843 containerd[2090]: time="2026-03-07T01:14:49.949786302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:14:49.955332 containerd[2090]: time="2026-03-07T01:14:49.955259413Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162746"
Mar 7 01:14:49.960894 containerd[2090]: time="2026-03-07T01:14:49.960815070Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:14:49.970602 containerd[2090]: time="2026-03-07T01:14:49.970532305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:14:49.972046 containerd[2090]: time="2026-03-07T01:14:49.971410784Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 1.335310067s"
Mar 7 01:14:49.972046 containerd[2090]: time="2026-03-07T01:14:49.971453224Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\""
Mar 7 01:14:49.972447 containerd[2090]: time="2026-03-07T01:14:49.972422579Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\""
Mar 7 01:14:51.042391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3586278483.mount: Deactivated successfully.
Mar 7 01:14:51.660389 containerd[2090]: time="2026-03-07T01:14:51.660327150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:14:51.662463 containerd[2090]: time="2026-03-07T01:14:51.662258213Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828647"
Mar 7 01:14:51.664932 containerd[2090]: time="2026-03-07T01:14:51.664590038Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:14:51.668300 containerd[2090]: time="2026-03-07T01:14:51.668258988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:14:51.668927 containerd[2090]: time="2026-03-07T01:14:51.668891297Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 1.696357389s"
Mar 7 01:14:51.669014 containerd[2090]: time="2026-03-07T01:14:51.668934818Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\""
Mar 7 01:14:51.669738 containerd[2090]: time="2026-03-07T01:14:51.669686628Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Mar 7 01:14:52.237136 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3042647896.mount: Deactivated successfully.
Mar 7 01:14:53.374862 containerd[2090]: time="2026-03-07T01:14:53.374803916Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:14:53.376351 containerd[2090]: time="2026-03-07T01:14:53.376299362Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Mar 7 01:14:53.377731 containerd[2090]: time="2026-03-07T01:14:53.377641173Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:14:53.381243 containerd[2090]: time="2026-03-07T01:14:53.380662665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:14:53.382350 containerd[2090]: time="2026-03-07T01:14:53.382306962Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.712462516s"
Mar 7 01:14:53.382454 containerd[2090]: time="2026-03-07T01:14:53.382360253Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Mar 7 01:14:53.383710 containerd[2090]: time="2026-03-07T01:14:53.383666684Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 7 01:14:53.879216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3626644640.mount: Deactivated successfully.
Mar 7 01:14:53.890374 containerd[2090]: time="2026-03-07T01:14:53.890301494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:14:53.892265 containerd[2090]: time="2026-03-07T01:14:53.892201797Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Mar 7 01:14:53.894623 containerd[2090]: time="2026-03-07T01:14:53.894560484Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:14:53.898067 containerd[2090]: time="2026-03-07T01:14:53.898010848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:14:53.898982 containerd[2090]: time="2026-03-07T01:14:53.898814805Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 515.092708ms"
Mar 7 01:14:53.898982 containerd[2090]: time="2026-03-07T01:14:53.898854053Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Mar 7 01:14:53.899455 containerd[2090]: time="2026-03-07T01:14:53.899278020Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Mar 7 01:14:54.438618 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3457465100.mount: Deactivated successfully.
Mar 7 01:14:55.781187 containerd[2090]: time="2026-03-07T01:14:55.781131851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:14:55.783247 containerd[2090]: time="2026-03-07T01:14:55.782971423Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718840"
Mar 7 01:14:55.785857 containerd[2090]: time="2026-03-07T01:14:55.785237812Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:14:55.789791 containerd[2090]: time="2026-03-07T01:14:55.789744179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:14:55.791023 containerd[2090]: time="2026-03-07T01:14:55.790986076Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.891670399s"
Mar 7 01:14:55.791164 containerd[2090]: time="2026-03-07T01:14:55.791145002Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Mar 7 01:14:56.487349 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 7 01:14:56.497977 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:14:56.784894 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:14:56.800241 (kubelet)[2841]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:14:56.866922 kubelet[2841]: E0307 01:14:56.866862 2841 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:14:56.869873 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:14:56.870106 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:14:59.524975 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:14:59.532029 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:14:59.573335 systemd[1]: Reloading requested from client PID 2859 ('systemctl') (unit session-7.scope)...
Mar 7 01:14:59.573356 systemd[1]: Reloading...
Mar 7 01:14:59.700917 zram_generator::config[2901]: No configuration found.
Mar 7 01:14:59.866304 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:14:59.951115 systemd[1]: Reloading finished in 376 ms.
Mar 7 01:14:59.998281 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 7 01:14:59.998396 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 7 01:14:59.998785 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:15:00.004741 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:15:00.278008 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:15:00.286234 (kubelet)[2972]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 7 01:15:00.342410 kubelet[2972]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 7 01:15:00.342410 kubelet[2972]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 7 01:15:00.342410 kubelet[2972]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 7 01:15:00.343416 kubelet[2972]: I0307 01:15:00.343341 2972 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 7 01:15:00.847150 kubelet[2972]: I0307 01:15:00.847092 2972 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 7 01:15:00.847150 kubelet[2972]: I0307 01:15:00.847144 2972 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 7 01:15:00.847539 kubelet[2972]: I0307 01:15:00.847511 2972 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 7 01:15:00.898860 kubelet[2972]: I0307 01:15:00.898821 2972 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 7 01:15:00.899750 kubelet[2972]: E0307 01:15:00.899689 2972 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.16.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.16.11:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 7 01:15:00.909631 kubelet[2972]: E0307 01:15:00.909586 2972 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 7 01:15:00.909631 kubelet[2972]: I0307 01:15:00.909628 2972 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 7 01:15:00.913967 kubelet[2972]: I0307 01:15:00.913936 2972 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 7 01:15:00.920161 kubelet[2972]: I0307 01:15:00.920073 2972 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 7 01:15:00.923925 kubelet[2972]: I0307 01:15:00.920151 2972 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-11","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Mar 7 01:15:00.923925 kubelet[2972]: I0307 01:15:00.923932 2972 topology_manager.go:138] "Creating topology manager with none policy"
Mar 7 01:15:00.924177 kubelet[2972]: I0307 01:15:00.923951 2972 container_manager_linux.go:303] "Creating device plugin manager"
Mar 7 01:15:00.924177 kubelet[2972]: I0307 01:15:00.924131 2972 state_mem.go:36] "Initialized new in-memory state store"
Mar 7 01:15:00.931032 kubelet[2972]: I0307 01:15:00.930969 2972 kubelet.go:480] "Attempting to sync node with API server"
Mar 7 01:15:00.931032 kubelet[2972]: I0307 01:15:00.931024 2972 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 7 01:15:00.931319 kubelet[2972]: I0307 01:15:00.931060 2972 kubelet.go:386] "Adding apiserver pod source"
Mar 7 01:15:00.934753 kubelet[2972]: I0307 01:15:00.933392 2972 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 7 01:15:00.943022 kubelet[2972]: E0307 01:15:00.942894 2972 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.16.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-11&limit=500&resourceVersion=0\": dial tcp 172.31.16.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 7 01:15:00.943403 kubelet[2972]: E0307 01:15:00.943371 2972 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.16.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 7 01:15:00.943529 kubelet[2972]: I0307 01:15:00.943513 2972 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 7 01:15:00.944316 kubelet[2972]: I0307 01:15:00.944295 2972 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 7 01:15:00.945503 kubelet[2972]: W0307 01:15:00.945480 2972 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 7 01:15:00.960385 kubelet[2972]: I0307 01:15:00.960351 2972 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 7 01:15:00.960560 kubelet[2972]: I0307 01:15:00.960410 2972 server.go:1289] "Started kubelet"
Mar 7 01:15:00.960719 kubelet[2972]: I0307 01:15:00.960659 2972 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 7 01:15:00.962118 kubelet[2972]: I0307 01:15:00.961713 2972 server.go:317] "Adding debug handlers to kubelet server"
Mar 7 01:15:00.964683 kubelet[2972]: I0307 01:15:00.963958 2972 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 7 01:15:00.964683 kubelet[2972]: I0307 01:15:00.964383 2972 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 7 01:15:00.966108 kubelet[2972]: E0307 01:15:00.964534 2972 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.11:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.11:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-11.189a6a22c92de130 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-11,UID:ip-172-31-16-11,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-11,},FirstTimestamp:2026-03-07 01:15:00.960370992 +0000 UTC m=+0.667342101,LastTimestamp:2026-03-07 01:15:00.960370992 +0000 UTC m=+0.667342101,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-11,}"
Mar 7 01:15:00.970588 kubelet[2972]: I0307 01:15:00.968960 2972 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 7 01:15:00.971860 kubelet[2972]: I0307 01:15:00.971037 2972 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 7 01:15:00.971860 kubelet[2972]: I0307 01:15:00.969091 2972 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 7 01:15:00.971860 kubelet[2972]: I0307 01:15:00.971308 2972 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 7 01:15:00.971860 kubelet[2972]: I0307 01:15:00.971353 2972 reconciler.go:26] "Reconciler: start to sync state"
Mar 7 01:15:00.973853 kubelet[2972]: E0307 01:15:00.972233 2972 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.16.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 7 01:15:00.973853 kubelet[2972]: E0307 01:15:00.972278 2972 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-16-11\" not found"
Mar 7 01:15:00.973853 kubelet[2972]: E0307 01:15:00.972721 2972 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-11?timeout=10s\": dial tcp 172.31.16.11:6443: connect: connection refused" interval="200ms"
Mar 7 01:15:00.975210 kubelet[2972]: I0307 01:15:00.975190 2972 factory.go:223] Registration of the systemd container factory successfully
Mar 7 01:15:00.975403 kubelet[2972]: I0307 01:15:00.975385 2972 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 7 01:15:00.978478 kubelet[2972]: I0307 01:15:00.978456 2972 factory.go:223] Registration of the containerd container factory successfully
Mar 7 01:15:00.982490 kubelet[2972]: I0307 01:15:00.982441 2972 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 7 01:15:00.997781 kubelet[2972]: E0307 01:15:00.997261 2972 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 7 01:15:01.008882 kubelet[2972]: I0307 01:15:01.008856 2972 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 7 01:15:01.008882 kubelet[2972]: I0307 01:15:01.008874 2972 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 7 01:15:01.009078 kubelet[2972]: I0307 01:15:01.008895 2972 state_mem.go:36] "Initialized new in-memory state store"
Mar 7 01:15:01.015717 kubelet[2972]: I0307 01:15:01.013762 2972 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 7 01:15:01.015717 kubelet[2972]: I0307 01:15:01.013858 2972 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 7 01:15:01.015717 kubelet[2972]: I0307 01:15:01.014450 2972 policy_none.go:49] "None policy: Start"
Mar 7 01:15:01.015717 kubelet[2972]: I0307 01:15:01.014470 2972 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 7 01:15:01.015717 kubelet[2972]: I0307 01:15:01.014485 2972 state_mem.go:35] "Initializing new in-memory state store"
Mar 7 01:15:01.015717 kubelet[2972]: I0307 01:15:01.015720 2972 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 7 01:15:01.016024 kubelet[2972]: I0307 01:15:01.015737 2972 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 7 01:15:01.016024 kubelet[2972]: E0307 01:15:01.015784 2972 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 7 01:15:01.016397 kubelet[2972]: E0307 01:15:01.016332 2972 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.16.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 7 01:15:01.024560 kubelet[2972]: E0307 01:15:01.024520 2972 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 7 01:15:01.028720 kubelet[2972]: I0307 01:15:01.027386 2972 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 7 01:15:01.028720 kubelet[2972]: I0307 01:15:01.027414 2972 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 7 01:15:01.028720 kubelet[2972]: I0307 01:15:01.028350 2972 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 7 01:15:01.030833 kubelet[2972]: E0307 01:15:01.030807 2972 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 7 01:15:01.030958 kubelet[2972]: E0307 01:15:01.030868 2972 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-16-11\" not found"
Mar 7 01:15:01.126164 kubelet[2972]: E0307 01:15:01.125851 2972 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-11\" not found" node="ip-172-31-16-11"
Mar 7 01:15:01.132006 kubelet[2972]: E0307 01:15:01.131972 2972 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-11\" not found" node="ip-172-31-16-11"
Mar 7 01:15:01.132285 kubelet[2972]: I0307 01:15:01.132260 2972 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-11"
Mar 7 01:15:01.132720 kubelet[2972]: E0307 01:15:01.132661 2972 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.11:6443/api/v1/nodes\": dial tcp 172.31.16.11:6443: connect: connection refused" node="ip-172-31-16-11"
Mar 7 01:15:01.143077 kubelet[2972]: E0307 01:15:01.143028 2972 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-11\" not found" node="ip-172-31-16-11"
Mar 7 01:15:01.183525 kubelet[2972]: E0307 01:15:01.182444 2972 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-11?timeout=10s\": dial tcp 172.31.16.11:6443: connect: connection refused" interval="400ms"
Mar 7 01:15:01.274118 kubelet[2972]: I0307 01:15:01.273940 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1c07c4e180feeecd1e9ef3c165f75916-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-11\" (UID: \"1c07c4e180feeecd1e9ef3c165f75916\") " pod="kube-system/kube-scheduler-ip-172-31-16-11"
Mar 7 01:15:01.274314 kubelet[2972]: I0307 01:15:01.274128 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c307a9ed1fcd1207cdd9382c63c71399-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-11\" (UID: \"c307a9ed1fcd1207cdd9382c63c71399\") " pod="kube-system/kube-controller-manager-ip-172-31-16-11"
Mar 7 01:15:01.274314 kubelet[2972]: I0307 01:15:01.274163 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c307a9ed1fcd1207cdd9382c63c71399-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-11\" (UID: \"c307a9ed1fcd1207cdd9382c63c71399\") " pod="kube-system/kube-controller-manager-ip-172-31-16-11"
Mar 7 01:15:01.274314 kubelet[2972]: I0307 01:15:01.274190 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ea89c88556f375bec5958a4e2f6d3008-ca-certs\") pod \"kube-apiserver-ip-172-31-16-11\" (UID: \"ea89c88556f375bec5958a4e2f6d3008\") " pod="kube-system/kube-apiserver-ip-172-31-16-11"
Mar 7 01:15:01.274314 kubelet[2972]: I0307 01:15:01.274214 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ea89c88556f375bec5958a4e2f6d3008-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-11\" (UID: \"ea89c88556f375bec5958a4e2f6d3008\") " pod="kube-system/kube-apiserver-ip-172-31-16-11"
Mar 7 01:15:01.274314 kubelet[2972]: I0307 01:15:01.274248 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ea89c88556f375bec5958a4e2f6d3008-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-11\" (UID: \"ea89c88556f375bec5958a4e2f6d3008\") " pod="kube-system/kube-apiserver-ip-172-31-16-11"
Mar 7 01:15:01.274877 kubelet[2972]: I0307 01:15:01.274275 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c307a9ed1fcd1207cdd9382c63c71399-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-11\" (UID: \"c307a9ed1fcd1207cdd9382c63c71399\") " pod="kube-system/kube-controller-manager-ip-172-31-16-11"
Mar 7 01:15:01.274877 kubelet[2972]: I0307 01:15:01.274312 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c307a9ed1fcd1207cdd9382c63c71399-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-11\" (UID: \"c307a9ed1fcd1207cdd9382c63c71399\") " pod="kube-system/kube-controller-manager-ip-172-31-16-11"
Mar 7 01:15:01.274877 kubelet[2972]: I0307 01:15:01.274340 2972 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c307a9ed1fcd1207cdd9382c63c71399-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-11\" (UID: \"c307a9ed1fcd1207cdd9382c63c71399\") " pod="kube-system/kube-controller-manager-ip-172-31-16-11"
Mar 7 01:15:01.338607 kubelet[2972]: I0307 01:15:01.338558 2972 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-11"
Mar 7 01:15:01.339259 kubelet[2972]: E0307 01:15:01.339209 2972 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.11:6443/api/v1/nodes\": dial tcp 172.31.16.11:6443: connect: connection refused" node="ip-172-31-16-11"
Mar 7 01:15:01.427944 containerd[2090]: time="2026-03-07T01:15:01.427592848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-11,Uid:1c07c4e180feeecd1e9ef3c165f75916,Namespace:kube-system,Attempt:0,}"
Mar 7 01:15:01.440405 containerd[2090]: time="2026-03-07T01:15:01.440345225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-11,Uid:ea89c88556f375bec5958a4e2f6d3008,Namespace:kube-system,Attempt:0,}"
Mar 7 01:15:01.444836 containerd[2090]: time="2026-03-07T01:15:01.444791012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-11,Uid:c307a9ed1fcd1207cdd9382c63c71399,Namespace:kube-system,Attempt:0,}"
Mar 7 01:15:01.584217 kubelet[2972]: E0307 01:15:01.584165 2972 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-11?timeout=10s\": dial tcp 172.31.16.11:6443: connect: connection refused" interval="800ms"
Mar 7 01:15:01.771952 kubelet[2972]: I0307 01:15:01.768277 2972 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-11"
Mar 7 01:15:01.771952 kubelet[2972]: E0307 01:15:01.771866 2972 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.11:6443/api/v1/nodes\": dial tcp 172.31.16.11:6443: connect: connection refused" node="ip-172-31-16-11"
Mar 7 01:15:01.997447 kubelet[2972]: E0307 01:15:01.997286 2972 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.16.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 7 01:15:02.195825 kubelet[2972]: E0307 01:15:02.192515 2972 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.16.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-11&limit=500&resourceVersion=0\":
dial tcp 172.31.16.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:15:02.195825 kubelet[2972]: E0307 01:15:02.192605 2972 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.16.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:15:02.258924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3194985777.mount: Deactivated successfully. Mar 7 01:15:02.281422 containerd[2090]: time="2026-03-07T01:15:02.281292512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:15:02.284067 containerd[2090]: time="2026-03-07T01:15:02.284003631Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 7 01:15:02.287074 containerd[2090]: time="2026-03-07T01:15:02.287012183Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:15:02.293490 containerd[2090]: time="2026-03-07T01:15:02.293072704Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 01:15:02.302112 containerd[2090]: time="2026-03-07T01:15:02.297306485Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 01:15:02.310170 containerd[2090]: time="2026-03-07T01:15:02.305352633Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:15:02.328614 containerd[2090]: time="2026-03-07T01:15:02.324070429Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 896.37499ms" Mar 7 01:15:02.345995 containerd[2090]: time="2026-03-07T01:15:02.345939600Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:15:02.366668 containerd[2090]: time="2026-03-07T01:15:02.366604829Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 926.147275ms" Mar 7 01:15:02.379200 containerd[2090]: time="2026-03-07T01:15:02.379135682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:15:02.388966 kubelet[2972]: E0307 01:15:02.388358 2972 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-11?timeout=10s\": dial tcp 172.31.16.11:6443: connect: connection refused" interval="1.6s" Mar 7 01:15:02.395131 containerd[2090]: time="2026-03-07T01:15:02.394333395Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 949.455614ms" Mar 7 01:15:02.470788 kubelet[2972]: E0307 01:15:02.467434 2972 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.16.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 01:15:02.533309 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Mar 7 01:15:02.592718 kubelet[2972]: I0307 01:15:02.592611 2972 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-11" Mar 7 01:15:02.597248 kubelet[2972]: E0307 01:15:02.597189 2972 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.11:6443/api/v1/nodes\": dial tcp 172.31.16.11:6443: connect: connection refused" node="ip-172-31-16-11" Mar 7 01:15:02.989301 kubelet[2972]: E0307 01:15:02.989223 2972 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.16.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.16.11:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 01:15:02.989844 containerd[2090]: time="2026-03-07T01:15:02.989722691Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:15:02.989844 containerd[2090]: time="2026-03-07T01:15:02.989814101Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:15:02.993605 containerd[2090]: time="2026-03-07T01:15:02.989837794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:15:02.995223 containerd[2090]: time="2026-03-07T01:15:02.993797037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:15:03.003363 containerd[2090]: time="2026-03-07T01:15:03.002960578Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:15:03.003363 containerd[2090]: time="2026-03-07T01:15:03.003030719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:15:03.003363 containerd[2090]: time="2026-03-07T01:15:03.003066764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:15:03.003363 containerd[2090]: time="2026-03-07T01:15:03.003179043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:15:03.033021 containerd[2090]: time="2026-03-07T01:15:03.028328197Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:15:03.033021 containerd[2090]: time="2026-03-07T01:15:03.028403332Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:15:03.033021 containerd[2090]: time="2026-03-07T01:15:03.028787952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:15:03.039714 containerd[2090]: time="2026-03-07T01:15:03.038025031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:15:03.474079 containerd[2090]: time="2026-03-07T01:15:03.474038629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-11,Uid:ea89c88556f375bec5958a4e2f6d3008,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1e9d24b8293b7d547f519ceea61a61e1cee3c98e055e9120ee2da6797a7a459\"" Mar 7 01:15:03.497749 containerd[2090]: time="2026-03-07T01:15:03.496477578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-11,Uid:c307a9ed1fcd1207cdd9382c63c71399,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d1ee218d999d7a76fd373fb04e602dbd7701757517310834ae918bd8559375d\"" Mar 7 01:15:03.502658 containerd[2090]: time="2026-03-07T01:15:03.502612583Z" level=info msg="CreateContainer within sandbox \"e1e9d24b8293b7d547f519ceea61a61e1cee3c98e055e9120ee2da6797a7a459\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 7 01:15:03.513988 containerd[2090]: time="2026-03-07T01:15:03.513945669Z" level=info msg="CreateContainer within sandbox \"4d1ee218d999d7a76fd373fb04e602dbd7701757517310834ae918bd8559375d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 7 01:15:03.516953 containerd[2090]: time="2026-03-07T01:15:03.516910369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-11,Uid:1c07c4e180feeecd1e9ef3c165f75916,Namespace:kube-system,Attempt:0,} returns sandbox id \"570c08d5f080173c9bf433e3ebf75055a45e25d11b11f22017670bacdddfc599\"" Mar 7 01:15:03.527280 containerd[2090]: time="2026-03-07T01:15:03.527240843Z" level=info msg="CreateContainer within sandbox \"570c08d5f080173c9bf433e3ebf75055a45e25d11b11f22017670bacdddfc599\" for container 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 7 01:15:03.570869 containerd[2090]: time="2026-03-07T01:15:03.570783711Z" level=info msg="CreateContainer within sandbox \"4d1ee218d999d7a76fd373fb04e602dbd7701757517310834ae918bd8559375d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"046db1dbf9677f053746ccda8e7b683eb45298a446fed18a4562cf7d44995543\"" Mar 7 01:15:03.572987 containerd[2090]: time="2026-03-07T01:15:03.572950368Z" level=info msg="StartContainer for \"046db1dbf9677f053746ccda8e7b683eb45298a446fed18a4562cf7d44995543\"" Mar 7 01:15:03.606120 containerd[2090]: time="2026-03-07T01:15:03.605915256Z" level=info msg="CreateContainer within sandbox \"e1e9d24b8293b7d547f519ceea61a61e1cee3c98e055e9120ee2da6797a7a459\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"41d41cbc3e5eb0825d857c9fecdab70e36b2530773e131e9d274c073657e3b4b\"" Mar 7 01:15:03.606791 containerd[2090]: time="2026-03-07T01:15:03.606652241Z" level=info msg="StartContainer for \"41d41cbc3e5eb0825d857c9fecdab70e36b2530773e131e9d274c073657e3b4b\"" Mar 7 01:15:03.620459 containerd[2090]: time="2026-03-07T01:15:03.618086110Z" level=info msg="CreateContainer within sandbox \"570c08d5f080173c9bf433e3ebf75055a45e25d11b11f22017670bacdddfc599\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dada990fdfee34d0fbc004e403f0d25ace210080ec8aaab327d5c8faf748092d\"" Mar 7 01:15:03.628849 containerd[2090]: time="2026-03-07T01:15:03.628677051Z" level=info msg="StartContainer for \"dada990fdfee34d0fbc004e403f0d25ace210080ec8aaab327d5c8faf748092d\"" Mar 7 01:15:03.750745 kubelet[2972]: E0307 01:15:03.748761 2972 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.16.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.CSIDriver" Mar 7 01:15:03.821742 containerd[2090]: time="2026-03-07T01:15:03.821003174Z" level=info msg="StartContainer for \"046db1dbf9677f053746ccda8e7b683eb45298a446fed18a4562cf7d44995543\" returns successfully" Mar 7 01:15:03.846877 containerd[2090]: time="2026-03-07T01:15:03.846830230Z" level=info msg="StartContainer for \"41d41cbc3e5eb0825d857c9fecdab70e36b2530773e131e9d274c073657e3b4b\" returns successfully" Mar 7 01:15:03.869397 kubelet[2972]: E0307 01:15:03.869259 2972 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.16.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:15:03.873065 containerd[2090]: time="2026-03-07T01:15:03.873020572Z" level=info msg="StartContainer for \"dada990fdfee34d0fbc004e403f0d25ace210080ec8aaab327d5c8faf748092d\" returns successfully" Mar 7 01:15:03.989535 kubelet[2972]: E0307 01:15:03.989492 2972 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-11?timeout=10s\": dial tcp 172.31.16.11:6443: connect: connection refused" interval="3.2s" Mar 7 01:15:04.034359 kubelet[2972]: E0307 01:15:04.033005 2972 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-11\" not found" node="ip-172-31-16-11" Mar 7 01:15:04.037120 kubelet[2972]: E0307 01:15:04.037091 2972 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-11\" not found" node="ip-172-31-16-11" Mar 7 01:15:04.043885 kubelet[2972]: E0307 01:15:04.043636 2972 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ip-172-31-16-11\" not found" node="ip-172-31-16-11" Mar 7 01:15:04.058999 kubelet[2972]: E0307 01:15:04.058946 2972 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.16.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-11&limit=500&resourceVersion=0\": dial tcp 172.31.16.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:15:04.202786 kubelet[2972]: I0307 01:15:04.202605 2972 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-11" Mar 7 01:15:04.204217 kubelet[2972]: E0307 01:15:04.204170 2972 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.11:6443/api/v1/nodes\": dial tcp 172.31.16.11:6443: connect: connection refused" node="ip-172-31-16-11" Mar 7 01:15:05.045721 kubelet[2972]: E0307 01:15:05.045512 2972 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-11\" not found" node="ip-172-31-16-11" Mar 7 01:15:05.049151 kubelet[2972]: E0307 01:15:05.049117 2972 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-11\" not found" node="ip-172-31-16-11" Mar 7 01:15:06.046740 kubelet[2972]: E0307 01:15:06.046421 2972 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-11\" not found" node="ip-172-31-16-11" Mar 7 01:15:07.409819 kubelet[2972]: I0307 01:15:07.408941 2972 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-11" Mar 7 01:15:07.668104 kubelet[2972]: E0307 01:15:07.666657 2972 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-16-11\" not found" node="ip-172-31-16-11" Mar 7 01:15:07.668104 kubelet[2972]: I0307 01:15:07.667757 2972 kubelet_node_status.go:78] 
"Successfully registered node" node="ip-172-31-16-11" Mar 7 01:15:07.672963 kubelet[2972]: I0307 01:15:07.672649 2972 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-11" Mar 7 01:15:07.691030 kubelet[2972]: E0307 01:15:07.690909 2972 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-16-11.189a6a22c92de130 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-11,UID:ip-172-31-16-11,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-11,},FirstTimestamp:2026-03-07 01:15:00.960370992 +0000 UTC m=+0.667342101,LastTimestamp:2026-03-07 01:15:00.960370992 +0000 UTC m=+0.667342101,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-11,}" Mar 7 01:15:07.697867 kubelet[2972]: E0307 01:15:07.697825 2972 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-16-11\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-16-11" Mar 7 01:15:07.697867 kubelet[2972]: I0307 01:15:07.697869 2972 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-11" Mar 7 01:15:07.701406 kubelet[2972]: E0307 01:15:07.701373 2972 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-16-11\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-16-11" Mar 7 01:15:07.702988 kubelet[2972]: I0307 01:15:07.702748 2972 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-11" Mar 7 01:15:07.705181 kubelet[2972]: E0307 01:15:07.705156 
2972 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-16-11\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-16-11" Mar 7 01:15:07.942245 kubelet[2972]: I0307 01:15:07.941678 2972 apiserver.go:52] "Watching apiserver" Mar 7 01:15:07.971910 kubelet[2972]: I0307 01:15:07.971816 2972 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 7 01:15:08.507527 kubelet[2972]: I0307 01:15:08.507487 2972 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-11" Mar 7 01:15:09.383144 kubelet[2972]: I0307 01:15:09.382881 2972 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-11" Mar 7 01:15:09.763396 systemd[1]: Reloading requested from client PID 3255 ('systemctl') (unit session-7.scope)... Mar 7 01:15:09.763419 systemd[1]: Reloading... Mar 7 01:15:09.890747 zram_generator::config[3298]: No configuration found. Mar 7 01:15:10.022124 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:15:10.117972 systemd[1]: Reloading finished in 353 ms. Mar 7 01:15:10.157782 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:15:10.174549 systemd[1]: kubelet.service: Deactivated successfully. Mar 7 01:15:10.175105 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:15:10.186458 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:15:10.429033 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 7 01:15:10.429567 (kubelet)[3365]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 01:15:10.515151 kubelet[3365]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 01:15:10.515151 kubelet[3365]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 7 01:15:10.515151 kubelet[3365]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 01:15:10.515659 kubelet[3365]: I0307 01:15:10.515220 3365 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 7 01:15:10.521196 kubelet[3365]: I0307 01:15:10.521166 3365 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 7 01:15:10.521775 kubelet[3365]: I0307 01:15:10.521509 3365 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 01:15:10.525171 kubelet[3365]: I0307 01:15:10.525133 3365 server.go:956] "Client rotation is on, will bootstrap in background" Mar 7 01:15:10.526635 kubelet[3365]: I0307 01:15:10.526596 3365 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 7 01:15:10.532732 kubelet[3365]: I0307 01:15:10.532545 3365 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 01:15:10.544498 kubelet[3365]: E0307 01:15:10.544444 3365 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = 
Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 01:15:10.544498 kubelet[3365]: I0307 01:15:10.544495 3365 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 7 01:15:10.548259 kubelet[3365]: I0307 01:15:10.548230 3365 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 7 01:15:10.548991 kubelet[3365]: I0307 01:15:10.548951 3365 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 01:15:10.549216 kubelet[3365]: I0307 01:15:10.548995 3365 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-11","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPo
licy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Mar 7 01:15:10.550329 kubelet[3365]: I0307 01:15:10.550298 3365 topology_manager.go:138] "Creating topology manager with none policy" Mar 7 01:15:10.550417 kubelet[3365]: I0307 01:15:10.550339 3365 container_manager_linux.go:303] "Creating device plugin manager" Mar 7 01:15:10.553864 kubelet[3365]: I0307 01:15:10.553824 3365 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:15:10.554122 kubelet[3365]: I0307 01:15:10.554099 3365 kubelet.go:480] "Attempting to sync node with API server" Mar 7 01:15:10.554122 kubelet[3365]: I0307 01:15:10.554122 3365 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 01:15:10.554288 kubelet[3365]: I0307 01:15:10.554266 3365 kubelet.go:386] "Adding apiserver pod source" Mar 7 01:15:10.554331 kubelet[3365]: I0307 01:15:10.554292 3365 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 01:15:10.558730 kubelet[3365]: I0307 01:15:10.558000 3365 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 01:15:10.558961 kubelet[3365]: I0307 01:15:10.558943 3365 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 01:15:10.563895 kubelet[3365]: I0307 01:15:10.563856 3365 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 7 01:15:10.564070 kubelet[3365]: I0307 01:15:10.564060 3365 server.go:1289] "Started kubelet" Mar 7 01:15:10.568309 kubelet[3365]: I0307 01:15:10.567879 3365 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 
01:15:10.570761 kubelet[3365]: I0307 01:15:10.569363 3365 server.go:317] "Adding debug handlers to kubelet server" Mar 7 01:15:10.571944 kubelet[3365]: I0307 01:15:10.571047 3365 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 01:15:10.571944 kubelet[3365]: I0307 01:15:10.571387 3365 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 01:15:10.596797 kubelet[3365]: I0307 01:15:10.596562 3365 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 7 01:15:10.610670 kubelet[3365]: I0307 01:15:10.609294 3365 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 01:15:10.610670 kubelet[3365]: I0307 01:15:10.610015 3365 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 7 01:15:10.610670 kubelet[3365]: I0307 01:15:10.610565 3365 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 7 01:15:10.611011 kubelet[3365]: I0307 01:15:10.610761 3365 reconciler.go:26] "Reconciler: start to sync state" Mar 7 01:15:10.619604 kubelet[3365]: I0307 01:15:10.619576 3365 factory.go:223] Registration of the systemd container factory successfully Mar 7 01:15:10.620250 kubelet[3365]: I0307 01:15:10.619895 3365 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 01:15:10.625708 kubelet[3365]: I0307 01:15:10.624992 3365 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 7 01:15:10.627256 kubelet[3365]: I0307 01:15:10.626508 3365 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Mar 7 01:15:10.627256 kubelet[3365]: I0307 01:15:10.626530 3365 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 7 01:15:10.627256 kubelet[3365]: I0307 01:15:10.626550 3365 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 7 01:15:10.627256 kubelet[3365]: I0307 01:15:10.626561 3365 kubelet.go:2436] "Starting kubelet main sync loop" Mar 7 01:15:10.627256 kubelet[3365]: E0307 01:15:10.626606 3365 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 01:15:10.632502 kubelet[3365]: I0307 01:15:10.629851 3365 factory.go:223] Registration of the containerd container factory successfully Mar 7 01:15:10.658426 kubelet[3365]: E0307 01:15:10.658386 3365 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 01:15:10.715930 kubelet[3365]: I0307 01:15:10.715813 3365 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 7 01:15:10.715930 kubelet[3365]: I0307 01:15:10.715832 3365 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 7 01:15:10.715930 kubelet[3365]: I0307 01:15:10.715856 3365 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:15:10.716155 kubelet[3365]: I0307 01:15:10.716019 3365 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 7 01:15:10.716155 kubelet[3365]: I0307 01:15:10.716032 3365 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 7 01:15:10.716155 kubelet[3365]: I0307 01:15:10.716065 3365 policy_none.go:49] "None policy: Start" Mar 7 01:15:10.716155 kubelet[3365]: I0307 01:15:10.716081 3365 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 7 01:15:10.716155 kubelet[3365]: I0307 01:15:10.716094 3365 state_mem.go:35] "Initializing new in-memory state store" Mar 7 
01:15:10.716338 kubelet[3365]: I0307 01:15:10.716222 3365 state_mem.go:75] "Updated machine memory state" Mar 7 01:15:10.718028 kubelet[3365]: E0307 01:15:10.717986 3365 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 01:15:10.718221 kubelet[3365]: I0307 01:15:10.718202 3365 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 7 01:15:10.718283 kubelet[3365]: I0307 01:15:10.718227 3365 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 01:15:10.720329 kubelet[3365]: I0307 01:15:10.720177 3365 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 7 01:15:10.723538 kubelet[3365]: E0307 01:15:10.723348 3365 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 7 01:15:10.727253 kubelet[3365]: I0307 01:15:10.727223 3365 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-11" Mar 7 01:15:10.728287 kubelet[3365]: I0307 01:15:10.727894 3365 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-11" Mar 7 01:15:10.731987 kubelet[3365]: I0307 01:15:10.731959 3365 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-11" Mar 7 01:15:10.745458 kubelet[3365]: E0307 01:15:10.744971 3365 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-16-11\" already exists" pod="kube-system/kube-scheduler-ip-172-31-16-11" Mar 7 01:15:10.747087 kubelet[3365]: E0307 01:15:10.745493 3365 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-16-11\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-16-11" Mar 7 01:15:10.782615 sudo[3400]: root : PWD=/home/core ; USER=root ; 
COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 7 01:15:10.783217 sudo[3400]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 7 01:15:10.815473 kubelet[3365]: I0307 01:15:10.815393 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1c07c4e180feeecd1e9ef3c165f75916-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-11\" (UID: \"1c07c4e180feeecd1e9ef3c165f75916\") " pod="kube-system/kube-scheduler-ip-172-31-16-11" Mar 7 01:15:10.815842 kubelet[3365]: I0307 01:15:10.815648 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ea89c88556f375bec5958a4e2f6d3008-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-11\" (UID: \"ea89c88556f375bec5958a4e2f6d3008\") " pod="kube-system/kube-apiserver-ip-172-31-16-11" Mar 7 01:15:10.815842 kubelet[3365]: I0307 01:15:10.815723 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c307a9ed1fcd1207cdd9382c63c71399-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-11\" (UID: \"c307a9ed1fcd1207cdd9382c63c71399\") " pod="kube-system/kube-controller-manager-ip-172-31-16-11" Mar 7 01:15:10.815842 kubelet[3365]: I0307 01:15:10.815748 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c307a9ed1fcd1207cdd9382c63c71399-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-11\" (UID: \"c307a9ed1fcd1207cdd9382c63c71399\") " pod="kube-system/kube-controller-manager-ip-172-31-16-11" Mar 7 01:15:10.815842 kubelet[3365]: I0307 01:15:10.815793 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/c307a9ed1fcd1207cdd9382c63c71399-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-11\" (UID: \"c307a9ed1fcd1207cdd9382c63c71399\") " pod="kube-system/kube-controller-manager-ip-172-31-16-11" Mar 7 01:15:10.815842 kubelet[3365]: I0307 01:15:10.815816 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c307a9ed1fcd1207cdd9382c63c71399-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-11\" (UID: \"c307a9ed1fcd1207cdd9382c63c71399\") " pod="kube-system/kube-controller-manager-ip-172-31-16-11" Mar 7 01:15:10.817990 kubelet[3365]: I0307 01:15:10.817958 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ea89c88556f375bec5958a4e2f6d3008-ca-certs\") pod \"kube-apiserver-ip-172-31-16-11\" (UID: \"ea89c88556f375bec5958a4e2f6d3008\") " pod="kube-system/kube-apiserver-ip-172-31-16-11" Mar 7 01:15:10.818077 kubelet[3365]: I0307 01:15:10.818010 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ea89c88556f375bec5958a4e2f6d3008-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-11\" (UID: \"ea89c88556f375bec5958a4e2f6d3008\") " pod="kube-system/kube-apiserver-ip-172-31-16-11" Mar 7 01:15:10.818077 kubelet[3365]: I0307 01:15:10.818038 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c307a9ed1fcd1207cdd9382c63c71399-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-11\" (UID: \"c307a9ed1fcd1207cdd9382c63c71399\") " pod="kube-system/kube-controller-manager-ip-172-31-16-11" Mar 7 01:15:10.832157 kubelet[3365]: I0307 01:15:10.831464 3365 kubelet_node_status.go:75] "Attempting to 
register node" node="ip-172-31-16-11" Mar 7 01:15:10.849884 kubelet[3365]: I0307 01:15:10.849851 3365 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-16-11" Mar 7 01:15:10.850022 kubelet[3365]: I0307 01:15:10.849939 3365 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-16-11" Mar 7 01:15:11.486664 sudo[3400]: pam_unix(sudo:session): session closed for user root Mar 7 01:15:11.567187 kubelet[3365]: I0307 01:15:11.567149 3365 apiserver.go:52] "Watching apiserver" Mar 7 01:15:11.611381 kubelet[3365]: I0307 01:15:11.611318 3365 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 7 01:15:11.678676 kubelet[3365]: I0307 01:15:11.677541 3365 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-11" Mar 7 01:15:11.678676 kubelet[3365]: I0307 01:15:11.677863 3365 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-11" Mar 7 01:15:11.692758 kubelet[3365]: E0307 01:15:11.692555 3365 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-16-11\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-16-11" Mar 7 01:15:11.701335 kubelet[3365]: E0307 01:15:11.699291 3365 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-16-11\" already exists" pod="kube-system/kube-apiserver-ip-172-31-16-11" Mar 7 01:15:11.733409 kubelet[3365]: I0307 01:15:11.733164 3365 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-16-11" podStartSLOduration=1.732964853 podStartE2EDuration="1.732964853s" podCreationTimestamp="2026-03-07 01:15:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:15:11.728271364 +0000 UTC m=+1.285475447" 
watchObservedRunningTime="2026-03-07 01:15:11.732964853 +0000 UTC m=+1.290168923" Mar 7 01:15:11.774887 kubelet[3365]: I0307 01:15:11.773673 3365 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-16-11" podStartSLOduration=3.773648722 podStartE2EDuration="3.773648722s" podCreationTimestamp="2026-03-07 01:15:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:15:11.755642371 +0000 UTC m=+1.312846453" watchObservedRunningTime="2026-03-07 01:15:11.773648722 +0000 UTC m=+1.330852800" Mar 7 01:15:11.794135 kubelet[3365]: I0307 01:15:11.793891 3365 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-16-11" podStartSLOduration=2.793869209 podStartE2EDuration="2.793869209s" podCreationTimestamp="2026-03-07 01:15:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:15:11.772668348 +0000 UTC m=+1.329872430" watchObservedRunningTime="2026-03-07 01:15:11.793869209 +0000 UTC m=+1.351073294" Mar 7 01:15:13.249526 sudo[2442]: pam_unix(sudo:session): session closed for user root Mar 7 01:15:13.327375 sshd[2438]: pam_unix(sshd:session): session closed for user core Mar 7 01:15:13.331829 systemd[1]: sshd@6-172.31.16.11:22-68.220.241.50:58334.service: Deactivated successfully. Mar 7 01:15:13.338435 systemd[1]: session-7.scope: Deactivated successfully. Mar 7 01:15:13.340376 systemd-logind[2063]: Session 7 logged out. Waiting for processes to exit. Mar 7 01:15:13.342262 systemd-logind[2063]: Removed session 7. 
Mar 7 01:15:15.641354 kubelet[3365]: I0307 01:15:15.640816 3365 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 7 01:15:15.641829 containerd[2090]: time="2026-03-07T01:15:15.641183049Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 7 01:15:15.642242 kubelet[3365]: I0307 01:15:15.642217 3365 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 7 01:15:16.658819 kubelet[3365]: I0307 01:15:16.657725 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/821dff98-0f36-4134-be33-26593ebc63dd-xtables-lock\") pod \"cilium-w7gt4\" (UID: \"821dff98-0f36-4134-be33-26593ebc63dd\") " pod="kube-system/cilium-w7gt4" Mar 7 01:15:16.658819 kubelet[3365]: I0307 01:15:16.657783 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1630e2be-1a3e-415f-aee6-77cd4b2bdaf3-xtables-lock\") pod \"kube-proxy-29t9x\" (UID: \"1630e2be-1a3e-415f-aee6-77cd4b2bdaf3\") " pod="kube-system/kube-proxy-29t9x" Mar 7 01:15:16.658819 kubelet[3365]: I0307 01:15:16.657807 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/821dff98-0f36-4134-be33-26593ebc63dd-bpf-maps\") pod \"cilium-w7gt4\" (UID: \"821dff98-0f36-4134-be33-26593ebc63dd\") " pod="kube-system/cilium-w7gt4" Mar 7 01:15:16.658819 kubelet[3365]: I0307 01:15:16.657829 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/821dff98-0f36-4134-be33-26593ebc63dd-hostproc\") pod \"cilium-w7gt4\" (UID: \"821dff98-0f36-4134-be33-26593ebc63dd\") " pod="kube-system/cilium-w7gt4" Mar 7 01:15:16.658819 
kubelet[3365]: I0307 01:15:16.657850 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/821dff98-0f36-4134-be33-26593ebc63dd-cni-path\") pod \"cilium-w7gt4\" (UID: \"821dff98-0f36-4134-be33-26593ebc63dd\") " pod="kube-system/cilium-w7gt4" Mar 7 01:15:16.658819 kubelet[3365]: I0307 01:15:16.657872 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/821dff98-0f36-4134-be33-26593ebc63dd-clustermesh-secrets\") pod \"cilium-w7gt4\" (UID: \"821dff98-0f36-4134-be33-26593ebc63dd\") " pod="kube-system/cilium-w7gt4" Mar 7 01:15:16.660036 kubelet[3365]: I0307 01:15:16.657896 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1630e2be-1a3e-415f-aee6-77cd4b2bdaf3-lib-modules\") pod \"kube-proxy-29t9x\" (UID: \"1630e2be-1a3e-415f-aee6-77cd4b2bdaf3\") " pod="kube-system/kube-proxy-29t9x" Mar 7 01:15:16.660036 kubelet[3365]: I0307 01:15:16.657919 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/821dff98-0f36-4134-be33-26593ebc63dd-host-proc-sys-kernel\") pod \"cilium-w7gt4\" (UID: \"821dff98-0f36-4134-be33-26593ebc63dd\") " pod="kube-system/cilium-w7gt4" Mar 7 01:15:16.660036 kubelet[3365]: I0307 01:15:16.657941 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/821dff98-0f36-4134-be33-26593ebc63dd-cilium-run\") pod \"cilium-w7gt4\" (UID: \"821dff98-0f36-4134-be33-26593ebc63dd\") " pod="kube-system/cilium-w7gt4" Mar 7 01:15:16.660036 kubelet[3365]: I0307 01:15:16.657972 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/821dff98-0f36-4134-be33-26593ebc63dd-etc-cni-netd\") pod \"cilium-w7gt4\" (UID: \"821dff98-0f36-4134-be33-26593ebc63dd\") " pod="kube-system/cilium-w7gt4" Mar 7 01:15:16.660036 kubelet[3365]: I0307 01:15:16.657996 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/821dff98-0f36-4134-be33-26593ebc63dd-cilium-config-path\") pod \"cilium-w7gt4\" (UID: \"821dff98-0f36-4134-be33-26593ebc63dd\") " pod="kube-system/cilium-w7gt4" Mar 7 01:15:16.660234 kubelet[3365]: I0307 01:15:16.658018 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/821dff98-0f36-4134-be33-26593ebc63dd-host-proc-sys-net\") pod \"cilium-w7gt4\" (UID: \"821dff98-0f36-4134-be33-26593ebc63dd\") " pod="kube-system/cilium-w7gt4" Mar 7 01:15:16.660234 kubelet[3365]: I0307 01:15:16.658039 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/821dff98-0f36-4134-be33-26593ebc63dd-hubble-tls\") pod \"cilium-w7gt4\" (UID: \"821dff98-0f36-4134-be33-26593ebc63dd\") " pod="kube-system/cilium-w7gt4" Mar 7 01:15:16.660234 kubelet[3365]: I0307 01:15:16.658059 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlcxv\" (UniqueName: \"kubernetes.io/projected/821dff98-0f36-4134-be33-26593ebc63dd-kube-api-access-tlcxv\") pod \"cilium-w7gt4\" (UID: \"821dff98-0f36-4134-be33-26593ebc63dd\") " pod="kube-system/cilium-w7gt4" Mar 7 01:15:16.660234 kubelet[3365]: I0307 01:15:16.658083 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/1630e2be-1a3e-415f-aee6-77cd4b2bdaf3-kube-proxy\") pod \"kube-proxy-29t9x\" (UID: \"1630e2be-1a3e-415f-aee6-77cd4b2bdaf3\") " pod="kube-system/kube-proxy-29t9x" Mar 7 01:15:16.660234 kubelet[3365]: I0307 01:15:16.658109 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f57b\" (UniqueName: \"kubernetes.io/projected/1630e2be-1a3e-415f-aee6-77cd4b2bdaf3-kube-api-access-4f57b\") pod \"kube-proxy-29t9x\" (UID: \"1630e2be-1a3e-415f-aee6-77cd4b2bdaf3\") " pod="kube-system/kube-proxy-29t9x" Mar 7 01:15:16.660431 kubelet[3365]: I0307 01:15:16.658134 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/821dff98-0f36-4134-be33-26593ebc63dd-cilium-cgroup\") pod \"cilium-w7gt4\" (UID: \"821dff98-0f36-4134-be33-26593ebc63dd\") " pod="kube-system/cilium-w7gt4" Mar 7 01:15:16.660431 kubelet[3365]: I0307 01:15:16.658159 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/821dff98-0f36-4134-be33-26593ebc63dd-lib-modules\") pod \"cilium-w7gt4\" (UID: \"821dff98-0f36-4134-be33-26593ebc63dd\") " pod="kube-system/cilium-w7gt4" Mar 7 01:15:16.758552 kubelet[3365]: I0307 01:15:16.758486 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5a64b2cf-5465-4dc8-822c-0f042c886154-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-s45mw\" (UID: \"5a64b2cf-5465-4dc8-822c-0f042c886154\") " pod="kube-system/cilium-operator-6c4d7847fc-s45mw" Mar 7 01:15:16.758757 kubelet[3365]: I0307 01:15:16.758642 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5gz4\" (UniqueName: 
\"kubernetes.io/projected/5a64b2cf-5465-4dc8-822c-0f042c886154-kube-api-access-l5gz4\") pod \"cilium-operator-6c4d7847fc-s45mw\" (UID: \"5a64b2cf-5465-4dc8-822c-0f042c886154\") " pod="kube-system/cilium-operator-6c4d7847fc-s45mw" Mar 7 01:15:16.924561 containerd[2090]: time="2026-03-07T01:15:16.924510741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-29t9x,Uid:1630e2be-1a3e-415f-aee6-77cd4b2bdaf3,Namespace:kube-system,Attempt:0,}" Mar 7 01:15:16.951157 containerd[2090]: time="2026-03-07T01:15:16.950503159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w7gt4,Uid:821dff98-0f36-4134-be33-26593ebc63dd,Namespace:kube-system,Attempt:0,}" Mar 7 01:15:16.968943 containerd[2090]: time="2026-03-07T01:15:16.968652675Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:15:16.969148 containerd[2090]: time="2026-03-07T01:15:16.968968912Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:15:16.969148 containerd[2090]: time="2026-03-07T01:15:16.969000081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:15:16.969402 containerd[2090]: time="2026-03-07T01:15:16.969231994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:15:17.006729 containerd[2090]: time="2026-03-07T01:15:17.006457112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:15:17.006729 containerd[2090]: time="2026-03-07T01:15:17.006687721Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:15:17.006924 containerd[2090]: time="2026-03-07T01:15:17.006781571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:15:17.008350 containerd[2090]: time="2026-03-07T01:15:17.007807619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:15:17.022958 containerd[2090]: time="2026-03-07T01:15:17.022059529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-s45mw,Uid:5a64b2cf-5465-4dc8-822c-0f042c886154,Namespace:kube-system,Attempt:0,}" Mar 7 01:15:17.028314 containerd[2090]: time="2026-03-07T01:15:17.028275596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-29t9x,Uid:1630e2be-1a3e-415f-aee6-77cd4b2bdaf3,Namespace:kube-system,Attempt:0,} returns sandbox id \"c873ffc1565ebcbe4d8209e932dc2e0a9f6e1ec1cd5f09e13f81c742d858e7c8\"" Mar 7 01:15:17.049385 containerd[2090]: time="2026-03-07T01:15:17.049169998Z" level=info msg="CreateContainer within sandbox \"c873ffc1565ebcbe4d8209e932dc2e0a9f6e1ec1cd5f09e13f81c742d858e7c8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 7 01:15:17.087860 containerd[2090]: time="2026-03-07T01:15:17.087829758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w7gt4,Uid:821dff98-0f36-4134-be33-26593ebc63dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"49b9199dd291f1646a9586dc15730d1da88a37463738488afc407ba0ac905b22\"" Mar 7 01:15:17.089596 containerd[2090]: time="2026-03-07T01:15:17.089547313Z" level=info msg="CreateContainer within sandbox \"c873ffc1565ebcbe4d8209e932dc2e0a9f6e1ec1cd5f09e13f81c742d858e7c8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f975857b8d1e5f7093f3d0dfce1eb957b722e4aafcbe53f8f83dceddc0b97740\"" Mar 7 01:15:17.091502 containerd[2090]: 
time="2026-03-07T01:15:17.090455229Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 7 01:15:17.091502 containerd[2090]: time="2026-03-07T01:15:17.090808644Z" level=info msg="StartContainer for \"f975857b8d1e5f7093f3d0dfce1eb957b722e4aafcbe53f8f83dceddc0b97740\"" Mar 7 01:15:17.107127 containerd[2090]: time="2026-03-07T01:15:17.106770691Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:15:17.107127 containerd[2090]: time="2026-03-07T01:15:17.106874416Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:15:17.107127 containerd[2090]: time="2026-03-07T01:15:17.106899951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:15:17.107127 containerd[2090]: time="2026-03-07T01:15:17.107053400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:15:17.132724 update_engine[2064]: I20260307 01:15:17.130160 2064 update_attempter.cc:509] Updating boot flags... 
Mar 7 01:15:17.205675 containerd[2090]: time="2026-03-07T01:15:17.205553956Z" level=info msg="StartContainer for \"f975857b8d1e5f7093f3d0dfce1eb957b722e4aafcbe53f8f83dceddc0b97740\" returns successfully" Mar 7 01:15:17.224093 containerd[2090]: time="2026-03-07T01:15:17.223790251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-s45mw,Uid:5a64b2cf-5465-4dc8-822c-0f042c886154,Namespace:kube-system,Attempt:0,} returns sandbox id \"b45fb913ccdd0d8f5cecf085b48aa2434d73ea51af46b18fb5387bdd0484111b\"" Mar 7 01:15:17.278818 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3613) Mar 7 01:15:17.492750 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3616) Mar 7 01:15:17.728829 kubelet[3365]: I0307 01:15:17.728774 3365 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-29t9x" podStartSLOduration=1.7287525000000001 podStartE2EDuration="1.7287525s" podCreationTimestamp="2026-03-07 01:15:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:15:17.728597854 +0000 UTC m=+7.285801937" watchObservedRunningTime="2026-03-07 01:15:17.7287525 +0000 UTC m=+7.285956582" Mar 7 01:15:22.074236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2933513341.mount: Deactivated successfully. 
Mar 7 01:15:24.726526 containerd[2090]: time="2026-03-07T01:15:24.726309096Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:15:24.728720 containerd[2090]: time="2026-03-07T01:15:24.728075602Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 7 01:15:24.730486 containerd[2090]: time="2026-03-07T01:15:24.730102318Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:15:24.732343 containerd[2090]: time="2026-03-07T01:15:24.732297128Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.641796667s" Mar 7 01:15:24.732482 containerd[2090]: time="2026-03-07T01:15:24.732345696Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 7 01:15:24.734257 containerd[2090]: time="2026-03-07T01:15:24.733855318Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 7 01:15:24.740718 containerd[2090]: time="2026-03-07T01:15:24.740659331Z" level=info msg="CreateContainer within sandbox \"49b9199dd291f1646a9586dc15730d1da88a37463738488afc407ba0ac905b22\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 7 01:15:24.813463 containerd[2090]: time="2026-03-07T01:15:24.813389741Z" level=info msg="CreateContainer within sandbox \"49b9199dd291f1646a9586dc15730d1da88a37463738488afc407ba0ac905b22\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"65c5afe55b73626e2c38062bb2485ab14d5a151c973d9a1b78255f6665db0de2\"" Mar 7 01:15:24.815714 containerd[2090]: time="2026-03-07T01:15:24.815622086Z" level=info msg="StartContainer for \"65c5afe55b73626e2c38062bb2485ab14d5a151c973d9a1b78255f6665db0de2\"" Mar 7 01:15:25.040983 containerd[2090]: time="2026-03-07T01:15:25.039783769Z" level=info msg="StartContainer for \"65c5afe55b73626e2c38062bb2485ab14d5a151c973d9a1b78255f6665db0de2\" returns successfully" Mar 7 01:15:25.104559 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65c5afe55b73626e2c38062bb2485ab14d5a151c973d9a1b78255f6665db0de2-rootfs.mount: Deactivated successfully. Mar 7 01:15:25.288900 containerd[2090]: time="2026-03-07T01:15:25.255986511Z" level=info msg="shim disconnected" id=65c5afe55b73626e2c38062bb2485ab14d5a151c973d9a1b78255f6665db0de2 namespace=k8s.io Mar 7 01:15:25.288900 containerd[2090]: time="2026-03-07T01:15:25.288892943Z" level=warning msg="cleaning up after shim disconnected" id=65c5afe55b73626e2c38062bb2485ab14d5a151c973d9a1b78255f6665db0de2 namespace=k8s.io Mar 7 01:15:25.289197 containerd[2090]: time="2026-03-07T01:15:25.288914177Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:15:25.912206 containerd[2090]: time="2026-03-07T01:15:25.912161591Z" level=info msg="CreateContainer within sandbox \"49b9199dd291f1646a9586dc15730d1da88a37463738488afc407ba0ac905b22\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 7 01:15:25.961553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4285216954.mount: Deactivated successfully. 
Mar 7 01:15:26.044940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount797665920.mount: Deactivated successfully. Mar 7 01:15:26.053914 containerd[2090]: time="2026-03-07T01:15:26.053829566Z" level=info msg="CreateContainer within sandbox \"49b9199dd291f1646a9586dc15730d1da88a37463738488afc407ba0ac905b22\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9ee547a0f5ed974ed740bdb639fcafa278c4d99e74a2e463c674d95980a92ed2\"" Mar 7 01:15:26.058322 containerd[2090]: time="2026-03-07T01:15:26.056851423Z" level=info msg="StartContainer for \"9ee547a0f5ed974ed740bdb639fcafa278c4d99e74a2e463c674d95980a92ed2\"" Mar 7 01:15:26.155416 containerd[2090]: time="2026-03-07T01:15:26.155244866Z" level=info msg="StartContainer for \"9ee547a0f5ed974ed740bdb639fcafa278c4d99e74a2e463c674d95980a92ed2\" returns successfully" Mar 7 01:15:26.174961 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 7 01:15:26.175386 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:15:26.175470 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 7 01:15:26.186430 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 7 01:15:26.232045 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Mar 7 01:15:26.274033 containerd[2090]: time="2026-03-07T01:15:26.273746246Z" level=info msg="shim disconnected" id=9ee547a0f5ed974ed740bdb639fcafa278c4d99e74a2e463c674d95980a92ed2 namespace=k8s.io Mar 7 01:15:26.274033 containerd[2090]: time="2026-03-07T01:15:26.273801647Z" level=warning msg="cleaning up after shim disconnected" id=9ee547a0f5ed974ed740bdb639fcafa278c4d99e74a2e463c674d95980a92ed2 namespace=k8s.io Mar 7 01:15:26.274033 containerd[2090]: time="2026-03-07T01:15:26.273814557Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:15:26.782092 containerd[2090]: time="2026-03-07T01:15:26.782038499Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:15:26.783993 containerd[2090]: time="2026-03-07T01:15:26.783825933Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 7 01:15:26.786291 containerd[2090]: time="2026-03-07T01:15:26.785997315Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:15:26.787763 containerd[2090]: time="2026-03-07T01:15:26.787529307Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.053627879s" Mar 7 01:15:26.787763 containerd[2090]: time="2026-03-07T01:15:26.787577515Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Mar 7 01:15:26.794165 containerd[2090]: time="2026-03-07T01:15:26.794117132Z" level=info msg="CreateContainer within sandbox \"b45fb913ccdd0d8f5cecf085b48aa2434d73ea51af46b18fb5387bdd0484111b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 7 01:15:26.815364 containerd[2090]: time="2026-03-07T01:15:26.815308231Z" level=info msg="CreateContainer within sandbox \"b45fb913ccdd0d8f5cecf085b48aa2434d73ea51af46b18fb5387bdd0484111b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"13438b5be2921063dbbe305252c681abfd22b71914e3a91f8fc356a76d951da5\""
Mar 7 01:15:26.816404 containerd[2090]: time="2026-03-07T01:15:26.816325146Z" level=info msg="StartContainer for \"13438b5be2921063dbbe305252c681abfd22b71914e3a91f8fc356a76d951da5\""
Mar 7 01:15:26.892624 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ee547a0f5ed974ed740bdb639fcafa278c4d99e74a2e463c674d95980a92ed2-rootfs.mount: Deactivated successfully.
Mar 7 01:15:26.903081 containerd[2090]: time="2026-03-07T01:15:26.903031243Z" level=info msg="CreateContainer within sandbox \"49b9199dd291f1646a9586dc15730d1da88a37463738488afc407ba0ac905b22\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 7 01:15:26.904481 containerd[2090]: time="2026-03-07T01:15:26.904368512Z" level=info msg="StartContainer for \"13438b5be2921063dbbe305252c681abfd22b71914e3a91f8fc356a76d951da5\" returns successfully"
Mar 7 01:15:26.956553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1043608869.mount: Deactivated successfully.
Mar 7 01:15:26.966923 containerd[2090]: time="2026-03-07T01:15:26.966851111Z" level=info msg="CreateContainer within sandbox \"49b9199dd291f1646a9586dc15730d1da88a37463738488afc407ba0ac905b22\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"424a714db06b41f3008f724ca7debcbb09ee3d6bfab5d2685babc42e9ef06f37\""
Mar 7 01:15:26.970017 containerd[2090]: time="2026-03-07T01:15:26.969974754Z" level=info msg="StartContainer for \"424a714db06b41f3008f724ca7debcbb09ee3d6bfab5d2685babc42e9ef06f37\""
Mar 7 01:15:27.112556 containerd[2090]: time="2026-03-07T01:15:27.112429808Z" level=info msg="StartContainer for \"424a714db06b41f3008f724ca7debcbb09ee3d6bfab5d2685babc42e9ef06f37\" returns successfully"
Mar 7 01:15:27.177333 containerd[2090]: time="2026-03-07T01:15:27.177156031Z" level=info msg="shim disconnected" id=424a714db06b41f3008f724ca7debcbb09ee3d6bfab5d2685babc42e9ef06f37 namespace=k8s.io
Mar 7 01:15:27.177333 containerd[2090]: time="2026-03-07T01:15:27.177311566Z" level=warning msg="cleaning up after shim disconnected" id=424a714db06b41f3008f724ca7debcbb09ee3d6bfab5d2685babc42e9ef06f37 namespace=k8s.io
Mar 7 01:15:27.177333 containerd[2090]: time="2026-03-07T01:15:27.177327916Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:15:27.204768 containerd[2090]: time="2026-03-07T01:15:27.204683645Z" level=warning msg="cleanup warnings time=\"2026-03-07T01:15:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 7 01:15:27.893588 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-424a714db06b41f3008f724ca7debcbb09ee3d6bfab5d2685babc42e9ef06f37-rootfs.mount: Deactivated successfully.
Mar 7 01:15:27.914747 containerd[2090]: time="2026-03-07T01:15:27.912502309Z" level=info msg="CreateContainer within sandbox \"49b9199dd291f1646a9586dc15730d1da88a37463738488afc407ba0ac905b22\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 7 01:15:27.946305 containerd[2090]: time="2026-03-07T01:15:27.946258340Z" level=info msg="CreateContainer within sandbox \"49b9199dd291f1646a9586dc15730d1da88a37463738488afc407ba0ac905b22\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ff4092f796e77a5951ae8bb0adcb090184daf5d2c0fa904d223c9497e2ff78d4\""
Mar 7 01:15:27.948724 containerd[2090]: time="2026-03-07T01:15:27.947887355Z" level=info msg="StartContainer for \"ff4092f796e77a5951ae8bb0adcb090184daf5d2c0fa904d223c9497e2ff78d4\""
Mar 7 01:15:27.993954 kubelet[3365]: I0307 01:15:27.990729 3365 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-s45mw" podStartSLOduration=2.412352181 podStartE2EDuration="11.968498699s" podCreationTimestamp="2026-03-07 01:15:16 +0000 UTC" firstStartedPulling="2026-03-07 01:15:17.232559124 +0000 UTC m=+6.789763202" lastFinishedPulling="2026-03-07 01:15:26.788705648 +0000 UTC m=+16.345909720" observedRunningTime="2026-03-07 01:15:27.968226243 +0000 UTC m=+17.525430325" watchObservedRunningTime="2026-03-07 01:15:27.968498699 +0000 UTC m=+17.525702780"
Mar 7 01:15:28.079782 systemd[1]: run-containerd-runc-k8s.io-ff4092f796e77a5951ae8bb0adcb090184daf5d2c0fa904d223c9497e2ff78d4-runc.77roUk.mount: Deactivated successfully.
Mar 7 01:15:28.168173 containerd[2090]: time="2026-03-07T01:15:28.167735319Z" level=info msg="StartContainer for \"ff4092f796e77a5951ae8bb0adcb090184daf5d2c0fa904d223c9497e2ff78d4\" returns successfully"
Mar 7 01:15:28.201604 containerd[2090]: time="2026-03-07T01:15:28.201525390Z" level=info msg="shim disconnected" id=ff4092f796e77a5951ae8bb0adcb090184daf5d2c0fa904d223c9497e2ff78d4 namespace=k8s.io
Mar 7 01:15:28.201604 containerd[2090]: time="2026-03-07T01:15:28.201593753Z" level=warning msg="cleaning up after shim disconnected" id=ff4092f796e77a5951ae8bb0adcb090184daf5d2c0fa904d223c9497e2ff78d4 namespace=k8s.io
Mar 7 01:15:28.201604 containerd[2090]: time="2026-03-07T01:15:28.201605220Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:15:28.889517 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff4092f796e77a5951ae8bb0adcb090184daf5d2c0fa904d223c9497e2ff78d4-rootfs.mount: Deactivated successfully.
Mar 7 01:15:28.904669 containerd[2090]: time="2026-03-07T01:15:28.904616564Z" level=info msg="CreateContainer within sandbox \"49b9199dd291f1646a9586dc15730d1da88a37463738488afc407ba0ac905b22\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 7 01:15:28.938535 containerd[2090]: time="2026-03-07T01:15:28.937852583Z" level=info msg="CreateContainer within sandbox \"49b9199dd291f1646a9586dc15730d1da88a37463738488afc407ba0ac905b22\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bcda42f9f3efcd05067deb60111321ae10a959380be31d6dbe651ee83893000c\""
Mar 7 01:15:28.940937 containerd[2090]: time="2026-03-07T01:15:28.939950850Z" level=info msg="StartContainer for \"bcda42f9f3efcd05067deb60111321ae10a959380be31d6dbe651ee83893000c\""
Mar 7 01:15:29.016940 containerd[2090]: time="2026-03-07T01:15:29.016884544Z" level=info msg="StartContainer for \"bcda42f9f3efcd05067deb60111321ae10a959380be31d6dbe651ee83893000c\" returns successfully"
Mar 7 01:15:29.209816 kubelet[3365]: I0307 01:15:29.209785 3365 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Mar 7 01:15:29.362913 kubelet[3365]: I0307 01:15:29.362862 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a4833a6-a5ff-4b6a-af8d-1acf4b474501-config-volume\") pod \"coredns-674b8bbfcf-9zxgx\" (UID: \"9a4833a6-a5ff-4b6a-af8d-1acf4b474501\") " pod="kube-system/coredns-674b8bbfcf-9zxgx"
Mar 7 01:15:29.363523 kubelet[3365]: I0307 01:15:29.363014 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5f6e645c-be48-4eec-a064-d9c37be9531b-config-volume\") pod \"coredns-674b8bbfcf-x4dfm\" (UID: \"5f6e645c-be48-4eec-a064-d9c37be9531b\") " pod="kube-system/coredns-674b8bbfcf-x4dfm"
Mar 7 01:15:29.363523 kubelet[3365]: I0307 01:15:29.363067 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkrhj\" (UniqueName: \"kubernetes.io/projected/9a4833a6-a5ff-4b6a-af8d-1acf4b474501-kube-api-access-lkrhj\") pod \"coredns-674b8bbfcf-9zxgx\" (UID: \"9a4833a6-a5ff-4b6a-af8d-1acf4b474501\") " pod="kube-system/coredns-674b8bbfcf-9zxgx"
Mar 7 01:15:29.363523 kubelet[3365]: I0307 01:15:29.363091 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8258\" (UniqueName: \"kubernetes.io/projected/5f6e645c-be48-4eec-a064-d9c37be9531b-kube-api-access-s8258\") pod \"coredns-674b8bbfcf-x4dfm\" (UID: \"5f6e645c-be48-4eec-a064-d9c37be9531b\") " pod="kube-system/coredns-674b8bbfcf-x4dfm"
Mar 7 01:15:29.561673 containerd[2090]: time="2026-03-07T01:15:29.561014403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9zxgx,Uid:9a4833a6-a5ff-4b6a-af8d-1acf4b474501,Namespace:kube-system,Attempt:0,}"
Mar 7 01:15:29.573384 containerd[2090]: time="2026-03-07T01:15:29.572961569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x4dfm,Uid:5f6e645c-be48-4eec-a064-d9c37be9531b,Namespace:kube-system,Attempt:0,}"
Mar 7 01:15:29.942730 kubelet[3365]: I0307 01:15:29.942636 3365 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-w7gt4" podStartSLOduration=6.298567832 podStartE2EDuration="13.942612932s" podCreationTimestamp="2026-03-07 01:15:16 +0000 UTC" firstStartedPulling="2026-03-07 01:15:17.089613099 +0000 UTC m=+6.646817173" lastFinishedPulling="2026-03-07 01:15:24.733658192 +0000 UTC m=+14.290862273" observedRunningTime="2026-03-07 01:15:29.942461633 +0000 UTC m=+19.499665714" watchObservedRunningTime="2026-03-07 01:15:29.942612932 +0000 UTC m=+19.499817012"
Mar 7 01:15:31.514666 (udev-worker)[4387]: Network interface NamePolicy= disabled on kernel command line.
Mar 7 01:15:31.516491 (udev-worker)[4346]: Network interface NamePolicy= disabled on kernel command line.
Mar 7 01:15:31.519533 systemd-networkd[1654]: cilium_host: Link UP
Mar 7 01:15:31.522107 systemd-networkd[1654]: cilium_net: Link UP
Mar 7 01:15:31.522113 systemd-networkd[1654]: cilium_net: Gained carrier
Mar 7 01:15:31.522399 systemd-networkd[1654]: cilium_host: Gained carrier
Mar 7 01:15:31.522718 systemd-networkd[1654]: cilium_host: Gained IPv6LL
Mar 7 01:15:31.646322 systemd-networkd[1654]: cilium_vxlan: Link UP
Mar 7 01:15:31.646336 systemd-networkd[1654]: cilium_vxlan: Gained carrier
Mar 7 01:15:32.133722 kernel: NET: Registered PF_ALG protocol family
Mar 7 01:15:32.442817 systemd-networkd[1654]: cilium_net: Gained IPv6LL
Mar 7 01:15:32.923288 systemd-networkd[1654]: lxc_health: Link UP
Mar 7 01:15:32.927267 (udev-worker)[4400]: Network interface NamePolicy= disabled on kernel command line.
Mar 7 01:15:32.940280 systemd-networkd[1654]: lxc_health: Gained carrier
Mar 7 01:15:33.263079 systemd-networkd[1654]: lxcc5220bbb3102: Link UP
Mar 7 01:15:33.272793 kernel: eth0: renamed from tmp7afac
Mar 7 01:15:33.280887 systemd-networkd[1654]: lxcc5220bbb3102: Gained carrier
Mar 7 01:15:33.529828 systemd-networkd[1654]: cilium_vxlan: Gained IPv6LL
Mar 7 01:15:33.732100 systemd-networkd[1654]: lxcc3c7379f731e: Link UP
Mar 7 01:15:33.740661 kernel: eth0: renamed from tmp7c939
Mar 7 01:15:33.751990 systemd-networkd[1654]: lxcc3c7379f731e: Gained carrier
Mar 7 01:15:34.232995 systemd-networkd[1654]: lxc_health: Gained IPv6LL
Mar 7 01:15:34.873122 systemd-networkd[1654]: lxcc3c7379f731e: Gained IPv6LL
Mar 7 01:15:35.001867 systemd-networkd[1654]: lxcc5220bbb3102: Gained IPv6LL
Mar 7 01:15:37.400016 ntpd[2052]: Listen normally on 6 cilium_host 192.168.0.225:123
Mar 7 01:15:37.400927 ntpd[2052]: 7 Mar 01:15:37 ntpd[2052]: Listen normally on 6 cilium_host 192.168.0.225:123
Mar 7 01:15:37.400927 ntpd[2052]: 7 Mar 01:15:37 ntpd[2052]: Listen normally on 7 cilium_net [fe80::3096:82ff:fe7f:35d2%4]:123
Mar 7 01:15:37.400927 ntpd[2052]: 7 Mar 01:15:37 ntpd[2052]: Listen normally on 8 cilium_host [fe80::70:e0ff:fe84:d47e%5]:123
Mar 7 01:15:37.400927 ntpd[2052]: 7 Mar 01:15:37 ntpd[2052]: Listen normally on 9 cilium_vxlan [fe80::839:acff:fe50:492a%6]:123
Mar 7 01:15:37.400927 ntpd[2052]: 7 Mar 01:15:37 ntpd[2052]: Listen normally on 10 lxc_health [fe80::7c98:83ff:fe77:8427%8]:123
Mar 7 01:15:37.400927 ntpd[2052]: 7 Mar 01:15:37 ntpd[2052]: Listen normally on 11 lxcc5220bbb3102 [fe80::40aa:c1ff:fe37:8071%10]:123
Mar 7 01:15:37.400927 ntpd[2052]: 7 Mar 01:15:37 ntpd[2052]: Listen normally on 12 lxcc3c7379f731e [fe80::f0ad:d0ff:fefd:b505%12]:123
Mar 7 01:15:37.400116 ntpd[2052]: Listen normally on 7 cilium_net [fe80::3096:82ff:fe7f:35d2%4]:123
Mar 7 01:15:37.400195 ntpd[2052]: Listen normally on 8 cilium_host [fe80::70:e0ff:fe84:d47e%5]:123
Mar 7 01:15:37.400242 ntpd[2052]: Listen normally on 9 cilium_vxlan [fe80::839:acff:fe50:492a%6]:123
Mar 7 01:15:37.400282 ntpd[2052]: Listen normally on 10 lxc_health [fe80::7c98:83ff:fe77:8427%8]:123
Mar 7 01:15:37.400319 ntpd[2052]: Listen normally on 11 lxcc5220bbb3102 [fe80::40aa:c1ff:fe37:8071%10]:123
Mar 7 01:15:37.400356 ntpd[2052]: Listen normally on 12 lxcc3c7379f731e [fe80::f0ad:d0ff:fefd:b505%12]:123
Mar 7 01:15:37.952151 containerd[2090]: time="2026-03-07T01:15:37.950892639Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:15:37.952151 containerd[2090]: time="2026-03-07T01:15:37.950980545Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:15:37.952151 containerd[2090]: time="2026-03-07T01:15:37.951038100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:15:37.952151 containerd[2090]: time="2026-03-07T01:15:37.951266775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:15:37.968734 containerd[2090]: time="2026-03-07T01:15:37.966293547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:15:37.968734 containerd[2090]: time="2026-03-07T01:15:37.966476514Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:15:37.968734 containerd[2090]: time="2026-03-07T01:15:37.966505868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:15:37.968734 containerd[2090]: time="2026-03-07T01:15:37.966646139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:15:38.022901 systemd[1]: run-containerd-runc-k8s.io-7afac8b3c19b3a1af8a71bb8a3a88e3a0f3ad67c2b3d47e0a46fa67156ca61ac-runc.ATZfTX.mount: Deactivated successfully.
Mar 7 01:15:38.173956 containerd[2090]: time="2026-03-07T01:15:38.173917109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9zxgx,Uid:9a4833a6-a5ff-4b6a-af8d-1acf4b474501,Namespace:kube-system,Attempt:0,} returns sandbox id \"7afac8b3c19b3a1af8a71bb8a3a88e3a0f3ad67c2b3d47e0a46fa67156ca61ac\""
Mar 7 01:15:38.185288 containerd[2090]: time="2026-03-07T01:15:38.185242026Z" level=info msg="CreateContainer within sandbox \"7afac8b3c19b3a1af8a71bb8a3a88e3a0f3ad67c2b3d47e0a46fa67156ca61ac\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 7 01:15:38.207867 containerd[2090]: time="2026-03-07T01:15:38.205034256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x4dfm,Uid:5f6e645c-be48-4eec-a064-d9c37be9531b,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c939d15f08f10354585b59328ff2678480d40ac36bb2c6b3843ff10342c53d1\""
Mar 7 01:15:38.216569 containerd[2090]: time="2026-03-07T01:15:38.216524365Z" level=info msg="CreateContainer within sandbox \"7c939d15f08f10354585b59328ff2678480d40ac36bb2c6b3843ff10342c53d1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 7 01:15:38.246112 containerd[2090]: time="2026-03-07T01:15:38.246067731Z" level=info msg="CreateContainer within sandbox \"7c939d15f08f10354585b59328ff2678480d40ac36bb2c6b3843ff10342c53d1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"726a8465dfbffb297626bb50d4de3007100a0fbb44e4ffcb9debed84b9686d73\""
Mar 7 01:15:38.246870 containerd[2090]: time="2026-03-07T01:15:38.246669026Z" level=info msg="StartContainer for \"726a8465dfbffb297626bb50d4de3007100a0fbb44e4ffcb9debed84b9686d73\""
Mar 7 01:15:38.248609 containerd[2090]: time="2026-03-07T01:15:38.248570281Z" level=info msg="CreateContainer within sandbox \"7afac8b3c19b3a1af8a71bb8a3a88e3a0f3ad67c2b3d47e0a46fa67156ca61ac\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"595c48ee6aef4958ba1c8035c992f3355ee24ddd1a37ecc3f18eecd62ec0e582\""
Mar 7 01:15:38.251248 containerd[2090]: time="2026-03-07T01:15:38.249728171Z" level=info msg="StartContainer for \"595c48ee6aef4958ba1c8035c992f3355ee24ddd1a37ecc3f18eecd62ec0e582\""
Mar 7 01:15:38.335884 containerd[2090]: time="2026-03-07T01:15:38.335831175Z" level=info msg="StartContainer for \"726a8465dfbffb297626bb50d4de3007100a0fbb44e4ffcb9debed84b9686d73\" returns successfully"
Mar 7 01:15:38.335884 containerd[2090]: time="2026-03-07T01:15:38.335831180Z" level=info msg="StartContainer for \"595c48ee6aef4958ba1c8035c992f3355ee24ddd1a37ecc3f18eecd62ec0e582\" returns successfully"
Mar 7 01:15:38.961663 kubelet[3365]: I0307 01:15:38.959133 3365 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-x4dfm" podStartSLOduration=22.959110556 podStartE2EDuration="22.959110556s" podCreationTimestamp="2026-03-07 01:15:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:15:38.956343022 +0000 UTC m=+28.513547102" watchObservedRunningTime="2026-03-07 01:15:38.959110556 +0000 UTC m=+28.516314627"
Mar 7 01:15:38.988571 kubelet[3365]: I0307 01:15:38.987109 3365 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-9zxgx" podStartSLOduration=22.987089496 podStartE2EDuration="22.987089496s" podCreationTimestamp="2026-03-07 01:15:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:15:38.983283969 +0000 UTC m=+28.540488049" watchObservedRunningTime="2026-03-07 01:15:38.987089496 +0000 UTC m=+28.544293578"
Mar 7 01:15:41.715723 kubelet[3365]: I0307 01:15:41.702495 3365 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 7 01:15:48.635811 systemd[1]: Started sshd@7-172.31.16.11:22-68.220.241.50:34614.service - OpenSSH per-connection server daemon (68.220.241.50:34614).
Mar 7 01:15:49.153469 sshd[4930]: Accepted publickey for core from 68.220.241.50 port 34614 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:15:49.155720 sshd[4930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:15:49.173526 systemd-logind[2063]: New session 8 of user core.
Mar 7 01:15:49.181177 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 7 01:15:50.196825 sshd[4930]: pam_unix(sshd:session): session closed for user core
Mar 7 01:15:50.204446 systemd[1]: sshd@7-172.31.16.11:22-68.220.241.50:34614.service: Deactivated successfully.
Mar 7 01:15:50.206423 systemd-logind[2063]: Session 8 logged out. Waiting for processes to exit.
Mar 7 01:15:50.210619 systemd[1]: session-8.scope: Deactivated successfully.
Mar 7 01:15:50.212002 systemd-logind[2063]: Removed session 8.
Mar 7 01:15:55.279974 systemd[1]: Started sshd@8-172.31.16.11:22-68.220.241.50:44830.service - OpenSSH per-connection server daemon (68.220.241.50:44830).
Mar 7 01:15:55.773069 sshd[4944]: Accepted publickey for core from 68.220.241.50 port 44830 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:15:55.774828 sshd[4944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:15:55.780195 systemd-logind[2063]: New session 9 of user core.
Mar 7 01:15:55.785095 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 7 01:15:56.197952 sshd[4944]: pam_unix(sshd:session): session closed for user core
Mar 7 01:15:56.203406 systemd[1]: sshd@8-172.31.16.11:22-68.220.241.50:44830.service: Deactivated successfully.
Mar 7 01:15:56.211928 systemd-logind[2063]: Session 9 logged out. Waiting for processes to exit.
Mar 7 01:15:56.212208 systemd[1]: session-9.scope: Deactivated successfully.
Mar 7 01:15:56.214247 systemd-logind[2063]: Removed session 9.
Mar 7 01:16:01.286924 systemd[1]: Started sshd@9-172.31.16.11:22-68.220.241.50:44842.service - OpenSSH per-connection server daemon (68.220.241.50:44842).
Mar 7 01:16:01.786513 sshd[4959]: Accepted publickey for core from 68.220.241.50 port 44842 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:16:01.787358 sshd[4959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:16:01.801326 systemd-logind[2063]: New session 10 of user core.
Mar 7 01:16:01.810758 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 7 01:16:02.220257 sshd[4959]: pam_unix(sshd:session): session closed for user core
Mar 7 01:16:02.228278 systemd[1]: sshd@9-172.31.16.11:22-68.220.241.50:44842.service: Deactivated successfully.
Mar 7 01:16:02.233052 systemd[1]: session-10.scope: Deactivated successfully.
Mar 7 01:16:02.234130 systemd-logind[2063]: Session 10 logged out. Waiting for processes to exit.
Mar 7 01:16:02.235495 systemd-logind[2063]: Removed session 10.
Mar 7 01:16:07.305578 systemd[1]: Started sshd@10-172.31.16.11:22-68.220.241.50:36740.service - OpenSSH per-connection server daemon (68.220.241.50:36740).
Mar 7 01:16:07.815730 sshd[4974]: Accepted publickey for core from 68.220.241.50 port 36740 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:16:07.816925 sshd[4974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:16:07.822527 systemd-logind[2063]: New session 11 of user core.
Mar 7 01:16:07.828126 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 7 01:16:08.245381 sshd[4974]: pam_unix(sshd:session): session closed for user core
Mar 7 01:16:08.250274 systemd[1]: sshd@10-172.31.16.11:22-68.220.241.50:36740.service: Deactivated successfully.
Mar 7 01:16:08.255493 systemd[1]: session-11.scope: Deactivated successfully.
Mar 7 01:16:08.256456 systemd-logind[2063]: Session 11 logged out. Waiting for processes to exit.
Mar 7 01:16:08.257677 systemd-logind[2063]: Removed session 11.
Mar 7 01:16:08.331083 systemd[1]: Started sshd@11-172.31.16.11:22-68.220.241.50:36744.service - OpenSSH per-connection server daemon (68.220.241.50:36744).
Mar 7 01:16:08.813869 sshd[4989]: Accepted publickey for core from 68.220.241.50 port 36744 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:16:08.815468 sshd[4989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:16:08.820389 systemd-logind[2063]: New session 12 of user core.
Mar 7 01:16:08.827050 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 7 01:16:09.289266 sshd[4989]: pam_unix(sshd:session): session closed for user core
Mar 7 01:16:09.294752 systemd[1]: sshd@11-172.31.16.11:22-68.220.241.50:36744.service: Deactivated successfully.
Mar 7 01:16:09.298568 systemd-logind[2063]: Session 12 logged out. Waiting for processes to exit.
Mar 7 01:16:09.298956 systemd[1]: session-12.scope: Deactivated successfully.
Mar 7 01:16:09.301925 systemd-logind[2063]: Removed session 12.
Mar 7 01:16:09.375104 systemd[1]: Started sshd@12-172.31.16.11:22-68.220.241.50:36758.service - OpenSSH per-connection server daemon (68.220.241.50:36758).
Mar 7 01:16:09.867762 sshd[5001]: Accepted publickey for core from 68.220.241.50 port 36758 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:16:09.869507 sshd[5001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:16:09.874894 systemd-logind[2063]: New session 13 of user core.
Mar 7 01:16:09.883073 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 7 01:16:10.298440 sshd[5001]: pam_unix(sshd:session): session closed for user core
Mar 7 01:16:10.302556 systemd[1]: sshd@12-172.31.16.11:22-68.220.241.50:36758.service: Deactivated successfully.
Mar 7 01:16:10.308916 systemd-logind[2063]: Session 13 logged out. Waiting for processes to exit.
Mar 7 01:16:10.310360 systemd[1]: session-13.scope: Deactivated successfully.
Mar 7 01:16:10.311688 systemd-logind[2063]: Removed session 13.
Mar 7 01:16:15.382111 systemd[1]: Started sshd@13-172.31.16.11:22-68.220.241.50:34116.service - OpenSSH per-connection server daemon (68.220.241.50:34116).
Mar 7 01:16:15.870440 sshd[5017]: Accepted publickey for core from 68.220.241.50 port 34116 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:16:15.872217 sshd[5017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:16:15.877951 systemd-logind[2063]: New session 14 of user core.
Mar 7 01:16:15.884075 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 7 01:16:16.289724 sshd[5017]: pam_unix(sshd:session): session closed for user core
Mar 7 01:16:16.294743 systemd-logind[2063]: Session 14 logged out. Waiting for processes to exit.
Mar 7 01:16:16.297024 systemd[1]: sshd@13-172.31.16.11:22-68.220.241.50:34116.service: Deactivated successfully.
Mar 7 01:16:16.301902 systemd[1]: session-14.scope: Deactivated successfully.
Mar 7 01:16:16.302624 systemd-logind[2063]: Removed session 14.
Mar 7 01:16:21.376724 systemd[1]: Started sshd@14-172.31.16.11:22-68.220.241.50:34124.service - OpenSSH per-connection server daemon (68.220.241.50:34124).
Mar 7 01:16:21.866238 sshd[5033]: Accepted publickey for core from 68.220.241.50 port 34124 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:16:21.867877 sshd[5033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:16:21.873463 systemd-logind[2063]: New session 15 of user core.
Mar 7 01:16:21.878035 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 7 01:16:22.287653 sshd[5033]: pam_unix(sshd:session): session closed for user core
Mar 7 01:16:22.291435 systemd[1]: sshd@14-172.31.16.11:22-68.220.241.50:34124.service: Deactivated successfully.
Mar 7 01:16:22.296661 systemd[1]: session-15.scope: Deactivated successfully.
Mar 7 01:16:22.298123 systemd-logind[2063]: Session 15 logged out. Waiting for processes to exit.
Mar 7 01:16:22.300090 systemd-logind[2063]: Removed session 15.
Mar 7 01:16:22.371094 systemd[1]: Started sshd@15-172.31.16.11:22-68.220.241.50:44740.service - OpenSSH per-connection server daemon (68.220.241.50:44740).
Mar 7 01:16:22.869741 sshd[5047]: Accepted publickey for core from 68.220.241.50 port 44740 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:16:22.871209 sshd[5047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:16:22.876925 systemd-logind[2063]: New session 16 of user core.
Mar 7 01:16:22.879133 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 7 01:16:23.731212 sshd[5047]: pam_unix(sshd:session): session closed for user core
Mar 7 01:16:23.738647 systemd[1]: sshd@15-172.31.16.11:22-68.220.241.50:44740.service: Deactivated successfully.
Mar 7 01:16:23.744164 systemd-logind[2063]: Session 16 logged out. Waiting for processes to exit.
Mar 7 01:16:23.744965 systemd[1]: session-16.scope: Deactivated successfully.
Mar 7 01:16:23.747662 systemd-logind[2063]: Removed session 16.
Mar 7 01:16:23.814190 systemd[1]: Started sshd@16-172.31.16.11:22-68.220.241.50:44750.service - OpenSSH per-connection server daemon (68.220.241.50:44750).
Mar 7 01:16:24.312558 sshd[5059]: Accepted publickey for core from 68.220.241.50 port 44750 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:16:24.313365 sshd[5059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:16:24.319169 systemd-logind[2063]: New session 17 of user core.
Mar 7 01:16:24.323071 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 7 01:16:25.325360 sshd[5059]: pam_unix(sshd:session): session closed for user core
Mar 7 01:16:25.331107 systemd[1]: sshd@16-172.31.16.11:22-68.220.241.50:44750.service: Deactivated successfully.
Mar 7 01:16:25.336117 systemd[1]: session-17.scope: Deactivated successfully.
Mar 7 01:16:25.337090 systemd-logind[2063]: Session 17 logged out. Waiting for processes to exit.
Mar 7 01:16:25.338407 systemd-logind[2063]: Removed session 17.
Mar 7 01:16:25.410075 systemd[1]: Started sshd@17-172.31.16.11:22-68.220.241.50:44762.service - OpenSSH per-connection server daemon (68.220.241.50:44762).
Mar 7 01:16:25.902449 sshd[5078]: Accepted publickey for core from 68.220.241.50 port 44762 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:16:25.904115 sshd[5078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:16:25.909686 systemd-logind[2063]: New session 18 of user core.
Mar 7 01:16:25.913062 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 7 01:16:26.480102 sshd[5078]: pam_unix(sshd:session): session closed for user core
Mar 7 01:16:26.484552 systemd[1]: sshd@17-172.31.16.11:22-68.220.241.50:44762.service: Deactivated successfully.
Mar 7 01:16:26.490596 systemd-logind[2063]: Session 18 logged out. Waiting for processes to exit.
Mar 7 01:16:26.492358 systemd[1]: session-18.scope: Deactivated successfully.
Mar 7 01:16:26.493802 systemd-logind[2063]: Removed session 18.
Mar 7 01:16:26.566084 systemd[1]: Started sshd@18-172.31.16.11:22-68.220.241.50:44778.service - OpenSSH per-connection server daemon (68.220.241.50:44778).
Mar 7 01:16:27.056973 sshd[5090]: Accepted publickey for core from 68.220.241.50 port 44778 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:16:27.058736 sshd[5090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:16:27.063323 systemd-logind[2063]: New session 19 of user core.
Mar 7 01:16:27.068043 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 7 01:16:27.479232 sshd[5090]: pam_unix(sshd:session): session closed for user core
Mar 7 01:16:27.482977 systemd[1]: sshd@18-172.31.16.11:22-68.220.241.50:44778.service: Deactivated successfully.
Mar 7 01:16:27.488985 systemd-logind[2063]: Session 19 logged out. Waiting for processes to exit.
Mar 7 01:16:27.489935 systemd[1]: session-19.scope: Deactivated successfully.
Mar 7 01:16:27.491761 systemd-logind[2063]: Removed session 19.
Mar 7 01:16:32.563074 systemd[1]: Started sshd@19-172.31.16.11:22-68.220.241.50:53842.service - OpenSSH per-connection server daemon (68.220.241.50:53842).
Mar 7 01:16:33.052729 sshd[5106]: Accepted publickey for core from 68.220.241.50 port 53842 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:16:33.054493 sshd[5106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:16:33.061451 systemd-logind[2063]: New session 20 of user core.
Mar 7 01:16:33.066301 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 7 01:16:33.462832 sshd[5106]: pam_unix(sshd:session): session closed for user core
Mar 7 01:16:33.466488 systemd-logind[2063]: Session 20 logged out. Waiting for processes to exit.
Mar 7 01:16:33.467994 systemd[1]: sshd@19-172.31.16.11:22-68.220.241.50:53842.service: Deactivated successfully.
Mar 7 01:16:33.471997 systemd[1]: session-20.scope: Deactivated successfully.
Mar 7 01:16:33.474492 systemd-logind[2063]: Removed session 20.
Mar 7 01:16:38.546049 systemd[1]: Started sshd@20-172.31.16.11:22-68.220.241.50:53846.service - OpenSSH per-connection server daemon (68.220.241.50:53846).
Mar 7 01:16:39.043597 sshd[5121]: Accepted publickey for core from 68.220.241.50 port 53846 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:16:39.045368 sshd[5121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:16:39.050076 systemd-logind[2063]: New session 21 of user core.
Mar 7 01:16:39.059315 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 7 01:16:39.479872 sshd[5121]: pam_unix(sshd:session): session closed for user core
Mar 7 01:16:39.483258 systemd[1]: sshd@20-172.31.16.11:22-68.220.241.50:53846.service: Deactivated successfully.
Mar 7 01:16:39.488656 systemd[1]: session-21.scope: Deactivated successfully.
Mar 7 01:16:39.490456 systemd-logind[2063]: Session 21 logged out. Waiting for processes to exit.
Mar 7 01:16:39.491923 systemd-logind[2063]: Removed session 21.
Mar 7 01:16:39.563677 systemd[1]: Started sshd@21-172.31.16.11:22-68.220.241.50:53862.service - OpenSSH per-connection server daemon (68.220.241.50:53862).
Mar 7 01:16:40.067410 sshd[5135]: Accepted publickey for core from 68.220.241.50 port 53862 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:16:40.070595 sshd[5135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:16:40.076542 systemd-logind[2063]: New session 22 of user core.
Mar 7 01:16:40.081065 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 7 01:16:41.684838 containerd[2090]: time="2026-03-07T01:16:41.683798658Z" level=info msg="StopContainer for \"13438b5be2921063dbbe305252c681abfd22b71914e3a91f8fc356a76d951da5\" with timeout 30 (s)"
Mar 7 01:16:41.684838 containerd[2090]: time="2026-03-07T01:16:41.684429758Z" level=info msg="Stop container \"13438b5be2921063dbbe305252c681abfd22b71914e3a91f8fc356a76d951da5\" with signal terminated"
Mar 7 01:16:41.750194 systemd[1]: run-containerd-runc-k8s.io-bcda42f9f3efcd05067deb60111321ae10a959380be31d6dbe651ee83893000c-runc.CVspHG.mount: Deactivated successfully.
Mar 7 01:16:41.783397 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13438b5be2921063dbbe305252c681abfd22b71914e3a91f8fc356a76d951da5-rootfs.mount: Deactivated successfully.
Mar 7 01:16:41.788485 containerd[2090]: time="2026-03-07T01:16:41.786878748Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 7 01:16:41.794926 containerd[2090]: time="2026-03-07T01:16:41.794884449Z" level=info msg="StopContainer for \"bcda42f9f3efcd05067deb60111321ae10a959380be31d6dbe651ee83893000c\" with timeout 2 (s)"
Mar 7 01:16:41.795972 containerd[2090]: time="2026-03-07T01:16:41.795236483Z" level=info msg="Stop container \"bcda42f9f3efcd05067deb60111321ae10a959380be31d6dbe651ee83893000c\" with signal terminated"
Mar 7 01:16:41.799062 containerd[2090]: time="2026-03-07T01:16:41.798791407Z" level=info msg="shim disconnected" id=13438b5be2921063dbbe305252c681abfd22b71914e3a91f8fc356a76d951da5 namespace=k8s.io
Mar 7 01:16:41.799062 containerd[2090]: time="2026-03-07T01:16:41.798868159Z" level=warning msg="cleaning up after shim disconnected" id=13438b5be2921063dbbe305252c681abfd22b71914e3a91f8fc356a76d951da5 namespace=k8s.io
Mar 7 01:16:41.799062 containerd[2090]: time="2026-03-07T01:16:41.798886549Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:16:41.809889 systemd-networkd[1654]: lxc_health: Link DOWN
Mar 7 01:16:41.809898 systemd-networkd[1654]: lxc_health: Lost carrier
Mar 7 01:16:41.846489 containerd[2090]: time="2026-03-07T01:16:41.846178777Z" level=info msg="StopContainer for \"13438b5be2921063dbbe305252c681abfd22b71914e3a91f8fc356a76d951da5\" returns successfully"
Mar 7 01:16:41.852475 containerd[2090]: time="2026-03-07T01:16:41.852420959Z" level=info msg="StopPodSandbox for \"b45fb913ccdd0d8f5cecf085b48aa2434d73ea51af46b18fb5387bdd0484111b\""
Mar 7 01:16:41.852475 containerd[2090]: time="2026-03-07T01:16:41.852468757Z" level=info msg="Container to stop \"13438b5be2921063dbbe305252c681abfd22b71914e3a91f8fc356a76d951da5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 01:16:41.859458 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b45fb913ccdd0d8f5cecf085b48aa2434d73ea51af46b18fb5387bdd0484111b-shm.mount: Deactivated successfully.
Mar 7 01:16:41.867557 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bcda42f9f3efcd05067deb60111321ae10a959380be31d6dbe651ee83893000c-rootfs.mount: Deactivated successfully.
Mar 7 01:16:41.884785 containerd[2090]: time="2026-03-07T01:16:41.884727457Z" level=info msg="shim disconnected" id=bcda42f9f3efcd05067deb60111321ae10a959380be31d6dbe651ee83893000c namespace=k8s.io
Mar 7 01:16:41.884785 containerd[2090]: time="2026-03-07T01:16:41.884784315Z" level=warning msg="cleaning up after shim disconnected" id=bcda42f9f3efcd05067deb60111321ae10a959380be31d6dbe651ee83893000c namespace=k8s.io
Mar 7 01:16:41.885081 containerd[2090]: time="2026-03-07T01:16:41.884795867Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:16:41.912319 containerd[2090]: time="2026-03-07T01:16:41.912270612Z" level=warning msg="cleanup warnings time=\"2026-03-07T01:16:41Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 7 01:16:41.919060 containerd[2090]: time="2026-03-07T01:16:41.919016965Z" level=info msg="StopContainer for \"bcda42f9f3efcd05067deb60111321ae10a959380be31d6dbe651ee83893000c\" returns successfully"
Mar 7 01:16:41.920125 containerd[2090]: time="2026-03-07T01:16:41.919867686Z" level=info msg="shim disconnected" id=b45fb913ccdd0d8f5cecf085b48aa2434d73ea51af46b18fb5387bdd0484111b namespace=k8s.io
Mar 7 01:16:41.920125 containerd[2090]: time="2026-03-07T01:16:41.919921633Z" level=warning msg="cleaning up after shim disconnected" id=b45fb913ccdd0d8f5cecf085b48aa2434d73ea51af46b18fb5387bdd0484111b namespace=k8s.io
Mar 7 01:16:41.920125 containerd[2090]: time="2026-03-07T01:16:41.919934582Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:16:41.920400 containerd[2090]: time="2026-03-07T01:16:41.920367806Z" level=info msg="StopPodSandbox for \"49b9199dd291f1646a9586dc15730d1da88a37463738488afc407ba0ac905b22\""
Mar 7 01:16:41.920465 containerd[2090]: time="2026-03-07T01:16:41.920409570Z" level=info msg="Container to stop \"65c5afe55b73626e2c38062bb2485ab14d5a151c973d9a1b78255f6665db0de2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 01:16:41.920465 containerd[2090]: time="2026-03-07T01:16:41.920428310Z" level=info msg="Container to stop \"9ee547a0f5ed974ed740bdb639fcafa278c4d99e74a2e463c674d95980a92ed2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 01:16:41.920465 containerd[2090]: time="2026-03-07T01:16:41.920444497Z" level=info msg="Container to stop \"424a714db06b41f3008f724ca7debcbb09ee3d6bfab5d2685babc42e9ef06f37\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 01:16:41.920465 containerd[2090]: time="2026-03-07T01:16:41.920458838Z" level=info msg="Container to stop \"ff4092f796e77a5951ae8bb0adcb090184daf5d2c0fa904d223c9497e2ff78d4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 01:16:41.920638 containerd[2090]: time="2026-03-07T01:16:41.920472901Z" level=info msg="Container to stop \"bcda42f9f3efcd05067deb60111321ae10a959380be31d6dbe651ee83893000c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 01:16:41.947460 containerd[2090]: time="2026-03-07T01:16:41.947191352Z" level=info msg="TearDown network for sandbox \"b45fb913ccdd0d8f5cecf085b48aa2434d73ea51af46b18fb5387bdd0484111b\" successfully"
Mar 7 01:16:41.947460 containerd[2090]: time="2026-03-07T01:16:41.947241050Z" level=info msg="StopPodSandbox for \"b45fb913ccdd0d8f5cecf085b48aa2434d73ea51af46b18fb5387bdd0484111b\" returns successfully"
Mar 7 01:16:41.987716 containerd[2090]: time="2026-03-07T01:16:41.984556332Z" level=info msg="shim disconnected" id=49b9199dd291f1646a9586dc15730d1da88a37463738488afc407ba0ac905b22 namespace=k8s.io
Mar 7 01:16:41.987716 containerd[2090]: time="2026-03-07T01:16:41.985737553Z" level=warning msg="cleaning up after shim disconnected" id=49b9199dd291f1646a9586dc15730d1da88a37463738488afc407ba0ac905b22 namespace=k8s.io
Mar 7 01:16:41.987716 containerd[2090]: time="2026-03-07T01:16:41.985767623Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:16:42.054485 containerd[2090]: time="2026-03-07T01:16:42.053471644Z" level=info msg="TearDown network for sandbox \"49b9199dd291f1646a9586dc15730d1da88a37463738488afc407ba0ac905b22\" successfully"
Mar 7 01:16:42.054485 containerd[2090]: time="2026-03-07T01:16:42.054348536Z" level=info msg="StopPodSandbox for \"49b9199dd291f1646a9586dc15730d1da88a37463738488afc407ba0ac905b22\" returns successfully"
Mar 7 01:16:42.087973 kubelet[3365]: I0307 01:16:42.087748 3365 scope.go:117] "RemoveContainer" containerID="13438b5be2921063dbbe305252c681abfd22b71914e3a91f8fc356a76d951da5"
Mar 7 01:16:42.094493 containerd[2090]: time="2026-03-07T01:16:42.093337642Z" level=info msg="RemoveContainer for \"13438b5be2921063dbbe305252c681abfd22b71914e3a91f8fc356a76d951da5\""
Mar 7 01:16:42.105405 containerd[2090]: time="2026-03-07T01:16:42.105210726Z" level=info msg="RemoveContainer for \"13438b5be2921063dbbe305252c681abfd22b71914e3a91f8fc356a76d951da5\" returns successfully"
Mar 7 01:16:42.106798 kubelet[3365]: I0307 01:16:42.106716 3365 scope.go:117] "RemoveContainer" containerID="13438b5be2921063dbbe305252c681abfd22b71914e3a91f8fc356a76d951da5"
Mar 7 01:16:42.108748 kubelet[3365]: I0307 01:16:42.107924 3365 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5a64b2cf-5465-4dc8-822c-0f042c886154-cilium-config-path\") pod \"5a64b2cf-5465-4dc8-822c-0f042c886154\" (UID: \"5a64b2cf-5465-4dc8-822c-0f042c886154\") "
Mar 7 01:16:42.108748 kubelet[3365]: I0307 01:16:42.107999 3365 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5gz4\" (UniqueName: \"kubernetes.io/projected/5a64b2cf-5465-4dc8-822c-0f042c886154-kube-api-access-l5gz4\") pod \"5a64b2cf-5465-4dc8-822c-0f042c886154\" (UID: \"5a64b2cf-5465-4dc8-822c-0f042c886154\") "
Mar 7 01:16:42.139267 containerd[2090]: time="2026-03-07T01:16:42.115560890Z" level=error msg="ContainerStatus for \"13438b5be2921063dbbe305252c681abfd22b71914e3a91f8fc356a76d951da5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"13438b5be2921063dbbe305252c681abfd22b71914e3a91f8fc356a76d951da5\": not found"
Mar 7 01:16:42.140994 kubelet[3365]: I0307 01:16:42.139242 3365 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a64b2cf-5465-4dc8-822c-0f042c886154-kube-api-access-l5gz4" (OuterVolumeSpecName: "kube-api-access-l5gz4") pod "5a64b2cf-5465-4dc8-822c-0f042c886154" (UID: "5a64b2cf-5465-4dc8-822c-0f042c886154"). InnerVolumeSpecName "kube-api-access-l5gz4". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 7 01:16:42.145063 kubelet[3365]: I0307 01:16:42.139083 3365 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a64b2cf-5465-4dc8-822c-0f042c886154-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5a64b2cf-5465-4dc8-822c-0f042c886154" (UID: "5a64b2cf-5465-4dc8-822c-0f042c886154"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 7 01:16:42.152226 kubelet[3365]: E0307 01:16:42.152152 3365 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"13438b5be2921063dbbe305252c681abfd22b71914e3a91f8fc356a76d951da5\": not found" containerID="13438b5be2921063dbbe305252c681abfd22b71914e3a91f8fc356a76d951da5"
Mar 7 01:16:42.169460 kubelet[3365]: I0307 01:16:42.152234 3365 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"13438b5be2921063dbbe305252c681abfd22b71914e3a91f8fc356a76d951da5"} err="failed to get container status \"13438b5be2921063dbbe305252c681abfd22b71914e3a91f8fc356a76d951da5\": rpc error: code = NotFound desc = an error occurred when try to find container \"13438b5be2921063dbbe305252c681abfd22b71914e3a91f8fc356a76d951da5\": not found"
Mar 7 01:16:42.169460 kubelet[3365]: I0307 01:16:42.169439 3365 scope.go:117] "RemoveContainer" containerID="bcda42f9f3efcd05067deb60111321ae10a959380be31d6dbe651ee83893000c"
Mar 7 01:16:42.171026 containerd[2090]: time="2026-03-07T01:16:42.170989466Z" level=info msg="RemoveContainer for \"bcda42f9f3efcd05067deb60111321ae10a959380be31d6dbe651ee83893000c\""
Mar 7 01:16:42.176426 containerd[2090]: time="2026-03-07T01:16:42.176382632Z" level=info msg="RemoveContainer for \"bcda42f9f3efcd05067deb60111321ae10a959380be31d6dbe651ee83893000c\" returns successfully"
Mar 7 01:16:42.176721 kubelet[3365]: I0307 01:16:42.176677 3365 scope.go:117] "RemoveContainer" containerID="ff4092f796e77a5951ae8bb0adcb090184daf5d2c0fa904d223c9497e2ff78d4"
Mar 7 01:16:42.178303 containerd[2090]: time="2026-03-07T01:16:42.178267549Z" level=info msg="RemoveContainer for \"ff4092f796e77a5951ae8bb0adcb090184daf5d2c0fa904d223c9497e2ff78d4\""
Mar 7 01:16:42.183673 containerd[2090]: time="2026-03-07T01:16:42.183631460Z" level=info msg="RemoveContainer for \"ff4092f796e77a5951ae8bb0adcb090184daf5d2c0fa904d223c9497e2ff78d4\" returns successfully"
Mar 7 01:16:42.183975 kubelet[3365]: I0307 01:16:42.183939 3365 scope.go:117] "RemoveContainer" containerID="424a714db06b41f3008f724ca7debcbb09ee3d6bfab5d2685babc42e9ef06f37"
Mar 7 01:16:42.185259 containerd[2090]: time="2026-03-07T01:16:42.185063863Z" level=info msg="RemoveContainer for \"424a714db06b41f3008f724ca7debcbb09ee3d6bfab5d2685babc42e9ef06f37\""
Mar 7 01:16:42.191173 containerd[2090]: time="2026-03-07T01:16:42.191086103Z" level=info msg="RemoveContainer for \"424a714db06b41f3008f724ca7debcbb09ee3d6bfab5d2685babc42e9ef06f37\" returns successfully"
Mar 7 01:16:42.191471 kubelet[3365]: I0307 01:16:42.191434 3365 scope.go:117] "RemoveContainer" containerID="9ee547a0f5ed974ed740bdb639fcafa278c4d99e74a2e463c674d95980a92ed2"
Mar 7 01:16:42.192810 containerd[2090]: time="2026-03-07T01:16:42.192771462Z" level=info msg="RemoveContainer for \"9ee547a0f5ed974ed740bdb639fcafa278c4d99e74a2e463c674d95980a92ed2\""
Mar 7 01:16:42.198725 containerd[2090]: time="2026-03-07T01:16:42.198593259Z" level=info msg="RemoveContainer for \"9ee547a0f5ed974ed740bdb639fcafa278c4d99e74a2e463c674d95980a92ed2\" returns successfully"
Mar 7 01:16:42.200057 kubelet[3365]: I0307 01:16:42.200031 3365 scope.go:117] "RemoveContainer" containerID="65c5afe55b73626e2c38062bb2485ab14d5a151c973d9a1b78255f6665db0de2"
Mar 7 01:16:42.202610 containerd[2090]: time="2026-03-07T01:16:42.202545735Z" level=info msg="RemoveContainer for \"65c5afe55b73626e2c38062bb2485ab14d5a151c973d9a1b78255f6665db0de2\""
Mar 7 01:16:42.208594 containerd[2090]: time="2026-03-07T01:16:42.208483277Z" level=info msg="RemoveContainer for \"65c5afe55b73626e2c38062bb2485ab14d5a151c973d9a1b78255f6665db0de2\" returns successfully"
Mar 7 01:16:42.209016 kubelet[3365]: I0307 01:16:42.208534 3365 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/821dff98-0f36-4134-be33-26593ebc63dd-clustermesh-secrets\") pod \"821dff98-0f36-4134-be33-26593ebc63dd\" (UID: \"821dff98-0f36-4134-be33-26593ebc63dd\") "
Mar 7 01:16:42.209495 kubelet[3365]: I0307 01:16:42.209468 3365 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/821dff98-0f36-4134-be33-26593ebc63dd-cilium-run\") pod \"821dff98-0f36-4134-be33-26593ebc63dd\" (UID: \"821dff98-0f36-4134-be33-26593ebc63dd\") "
Mar 7 01:16:42.209638 kubelet[3365]: I0307 01:16:42.209618 3365 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/821dff98-0f36-4134-be33-26593ebc63dd-etc-cni-netd\") pod \"821dff98-0f36-4134-be33-26593ebc63dd\" (UID: \"821dff98-0f36-4134-be33-26593ebc63dd\") "
Mar 7 01:16:42.209895 kubelet[3365]: I0307 01:16:42.209655 3365 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/821dff98-0f36-4134-be33-26593ebc63dd-lib-modules\") pod \"821dff98-0f36-4134-be33-26593ebc63dd\" (UID: \"821dff98-0f36-4134-be33-26593ebc63dd\") "
Mar 7 01:16:42.209895 kubelet[3365]: I0307 01:16:42.209705 3365 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/821dff98-0f36-4134-be33-26593ebc63dd-cilium-config-path\") pod \"821dff98-0f36-4134-be33-26593ebc63dd\" (UID: \"821dff98-0f36-4134-be33-26593ebc63dd\") "
Mar 7 01:16:42.209895 kubelet[3365]: I0307 01:16:42.209736 3365 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/821dff98-0f36-4134-be33-26593ebc63dd-hubble-tls\") pod \"821dff98-0f36-4134-be33-26593ebc63dd\" (UID: \"821dff98-0f36-4134-be33-26593ebc63dd\") "
Mar 7 01:16:42.209895 kubelet[3365]: I0307 01:16:42.209758 3365 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlcxv\" (UniqueName: \"kubernetes.io/projected/821dff98-0f36-4134-be33-26593ebc63dd-kube-api-access-tlcxv\") pod \"821dff98-0f36-4134-be33-26593ebc63dd\" (UID: \"821dff98-0f36-4134-be33-26593ebc63dd\") "
Mar 7 01:16:42.209895 kubelet[3365]: I0307 01:16:42.209778 3365 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/821dff98-0f36-4134-be33-26593ebc63dd-xtables-lock\") pod \"821dff98-0f36-4134-be33-26593ebc63dd\" (UID: \"821dff98-0f36-4134-be33-26593ebc63dd\") "
Mar 7 01:16:42.209895 kubelet[3365]: I0307 01:16:42.209801 3365 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/821dff98-0f36-4134-be33-26593ebc63dd-hostproc\") pod \"821dff98-0f36-4134-be33-26593ebc63dd\" (UID: \"821dff98-0f36-4134-be33-26593ebc63dd\") "
Mar 7 01:16:42.210188 kubelet[3365]: I0307 01:16:42.209822 3365 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/821dff98-0f36-4134-be33-26593ebc63dd-cilium-cgroup\") pod \"821dff98-0f36-4134-be33-26593ebc63dd\" (UID: \"821dff98-0f36-4134-be33-26593ebc63dd\") "
Mar 7 01:16:42.210188 kubelet[3365]: I0307 01:16:42.209843 3365 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/821dff98-0f36-4134-be33-26593ebc63dd-cni-path\") pod \"821dff98-0f36-4134-be33-26593ebc63dd\" (UID: \"821dff98-0f36-4134-be33-26593ebc63dd\") "
Mar 7 01:16:42.210188 kubelet[3365]: I0307 01:16:42.209862 3365 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/821dff98-0f36-4134-be33-26593ebc63dd-bpf-maps\") pod \"821dff98-0f36-4134-be33-26593ebc63dd\" (UID: \"821dff98-0f36-4134-be33-26593ebc63dd\") "
Mar 7 01:16:42.210188 kubelet[3365]: I0307 01:16:42.209884 3365 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/821dff98-0f36-4134-be33-26593ebc63dd-host-proc-sys-kernel\") pod \"821dff98-0f36-4134-be33-26593ebc63dd\" (UID: \"821dff98-0f36-4134-be33-26593ebc63dd\") "
Mar 7 01:16:42.210188 kubelet[3365]: I0307 01:16:42.209907 3365 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/821dff98-0f36-4134-be33-26593ebc63dd-host-proc-sys-net\") pod \"821dff98-0f36-4134-be33-26593ebc63dd\" (UID: \"821dff98-0f36-4134-be33-26593ebc63dd\") "
Mar 7 01:16:42.210188 kubelet[3365]: I0307 01:16:42.209967 3365 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5a64b2cf-5465-4dc8-822c-0f042c886154-cilium-config-path\") on node \"ip-172-31-16-11\" DevicePath \"\""
Mar 7 01:16:42.210433 kubelet[3365]: I0307 01:16:42.209985 3365 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l5gz4\" (UniqueName: \"kubernetes.io/projected/5a64b2cf-5465-4dc8-822c-0f042c886154-kube-api-access-l5gz4\") on node \"ip-172-31-16-11\" DevicePath \"\""
Mar 7 01:16:42.210433 kubelet[3365]: I0307 01:16:42.210031 3365 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/821dff98-0f36-4134-be33-26593ebc63dd-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "821dff98-0f36-4134-be33-26593ebc63dd" (UID: "821dff98-0f36-4134-be33-26593ebc63dd"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:16:42.210518 kubelet[3365]: I0307 01:16:42.210466 3365 scope.go:117] "RemoveContainer" containerID="bcda42f9f3efcd05067deb60111321ae10a959380be31d6dbe651ee83893000c"
Mar 7 01:16:42.211777 containerd[2090]: time="2026-03-07T01:16:42.211136630Z" level=error msg="ContainerStatus for \"bcda42f9f3efcd05067deb60111321ae10a959380be31d6dbe651ee83893000c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bcda42f9f3efcd05067deb60111321ae10a959380be31d6dbe651ee83893000c\": not found"
Mar 7 01:16:42.212274 kubelet[3365]: I0307 01:16:42.212226 3365 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/821dff98-0f36-4134-be33-26593ebc63dd-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "821dff98-0f36-4134-be33-26593ebc63dd" (UID: "821dff98-0f36-4134-be33-26593ebc63dd"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:16:42.212274 kubelet[3365]: I0307 01:16:42.212287 3365 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/821dff98-0f36-4134-be33-26593ebc63dd-hostproc" (OuterVolumeSpecName: "hostproc") pod "821dff98-0f36-4134-be33-26593ebc63dd" (UID: "821dff98-0f36-4134-be33-26593ebc63dd"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:16:42.212274 kubelet[3365]: I0307 01:16:42.212311 3365 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/821dff98-0f36-4134-be33-26593ebc63dd-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "821dff98-0f36-4134-be33-26593ebc63dd" (UID: "821dff98-0f36-4134-be33-26593ebc63dd"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:16:42.212274 kubelet[3365]: I0307 01:16:42.212347 3365 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/821dff98-0f36-4134-be33-26593ebc63dd-cni-path" (OuterVolumeSpecName: "cni-path") pod "821dff98-0f36-4134-be33-26593ebc63dd" (UID: "821dff98-0f36-4134-be33-26593ebc63dd"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:16:42.212274 kubelet[3365]: I0307 01:16:42.212369 3365 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/821dff98-0f36-4134-be33-26593ebc63dd-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "821dff98-0f36-4134-be33-26593ebc63dd" (UID: "821dff98-0f36-4134-be33-26593ebc63dd"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:16:42.216021 kubelet[3365]: I0307 01:16:42.212389 3365 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/821dff98-0f36-4134-be33-26593ebc63dd-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "821dff98-0f36-4134-be33-26593ebc63dd" (UID: "821dff98-0f36-4134-be33-26593ebc63dd"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:16:42.216021 kubelet[3365]: E0307 01:16:42.212524 3365 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bcda42f9f3efcd05067deb60111321ae10a959380be31d6dbe651ee83893000c\": not found" containerID="bcda42f9f3efcd05067deb60111321ae10a959380be31d6dbe651ee83893000c"
Mar 7 01:16:42.216021 kubelet[3365]: I0307 01:16:42.212555 3365 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bcda42f9f3efcd05067deb60111321ae10a959380be31d6dbe651ee83893000c"} err="failed to get container status \"bcda42f9f3efcd05067deb60111321ae10a959380be31d6dbe651ee83893000c\": rpc error: code = NotFound desc = an error occurred when try to find container \"bcda42f9f3efcd05067deb60111321ae10a959380be31d6dbe651ee83893000c\": not found"
Mar 7 01:16:42.216021 kubelet[3365]: I0307 01:16:42.212599 3365 scope.go:117] "RemoveContainer" containerID="ff4092f796e77a5951ae8bb0adcb090184daf5d2c0fa904d223c9497e2ff78d4"
Mar 7 01:16:42.216021 kubelet[3365]: E0307 01:16:42.213709 3365 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ff4092f796e77a5951ae8bb0adcb090184daf5d2c0fa904d223c9497e2ff78d4\": not found" containerID="ff4092f796e77a5951ae8bb0adcb090184daf5d2c0fa904d223c9497e2ff78d4"
Mar 7 01:16:42.216240 containerd[2090]: time="2026-03-07T01:16:42.213000974Z" level=error msg="ContainerStatus for \"ff4092f796e77a5951ae8bb0adcb090184daf5d2c0fa904d223c9497e2ff78d4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ff4092f796e77a5951ae8bb0adcb090184daf5d2c0fa904d223c9497e2ff78d4\": not found"
Mar 7 01:16:42.216240 containerd[2090]: time="2026-03-07T01:16:42.214014266Z" level=error msg="ContainerStatus for \"424a714db06b41f3008f724ca7debcbb09ee3d6bfab5d2685babc42e9ef06f37\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"424a714db06b41f3008f724ca7debcbb09ee3d6bfab5d2685babc42e9ef06f37\": not found"
Mar 7 01:16:42.216339 kubelet[3365]: I0307 01:16:42.213744 3365 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ff4092f796e77a5951ae8bb0adcb090184daf5d2c0fa904d223c9497e2ff78d4"} err="failed to get container status \"ff4092f796e77a5951ae8bb0adcb090184daf5d2c0fa904d223c9497e2ff78d4\": rpc error: code = NotFound desc = an error occurred when try to find container \"ff4092f796e77a5951ae8bb0adcb090184daf5d2c0fa904d223c9497e2ff78d4\": not found"
Mar 7 01:16:42.216339 kubelet[3365]: I0307 01:16:42.213770 3365 scope.go:117] "RemoveContainer" containerID="424a714db06b41f3008f724ca7debcbb09ee3d6bfab5d2685babc42e9ef06f37"
Mar 7 01:16:42.216339 kubelet[3365]: I0307 01:16:42.213880 3365 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/821dff98-0f36-4134-be33-26593ebc63dd-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "821dff98-0f36-4134-be33-26593ebc63dd" (UID: "821dff98-0f36-4134-be33-26593ebc63dd"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:16:42.216339 kubelet[3365]: I0307 01:16:42.213911 3365 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/821dff98-0f36-4134-be33-26593ebc63dd-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "821dff98-0f36-4134-be33-26593ebc63dd" (UID: "821dff98-0f36-4134-be33-26593ebc63dd"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:16:42.216339 kubelet[3365]: I0307 01:16:42.213930 3365 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/821dff98-0f36-4134-be33-26593ebc63dd-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "821dff98-0f36-4134-be33-26593ebc63dd" (UID: "821dff98-0f36-4134-be33-26593ebc63dd"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:16:42.216554 kubelet[3365]: E0307 01:16:42.215902 3365 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"424a714db06b41f3008f724ca7debcbb09ee3d6bfab5d2685babc42e9ef06f37\": not found" containerID="424a714db06b41f3008f724ca7debcbb09ee3d6bfab5d2685babc42e9ef06f37"
Mar 7 01:16:42.216554 kubelet[3365]: I0307 01:16:42.215935 3365 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"424a714db06b41f3008f724ca7debcbb09ee3d6bfab5d2685babc42e9ef06f37"} err="failed to get container status \"424a714db06b41f3008f724ca7debcbb09ee3d6bfab5d2685babc42e9ef06f37\": rpc error: code = NotFound desc = an error occurred when try to find container \"424a714db06b41f3008f724ca7debcbb09ee3d6bfab5d2685babc42e9ef06f37\": not found"
Mar 7 01:16:42.216554 kubelet[3365]: I0307 01:16:42.215978 3365 scope.go:117] "RemoveContainer" containerID="9ee547a0f5ed974ed740bdb639fcafa278c4d99e74a2e463c674d95980a92ed2"
Mar 7 01:16:42.216680 containerd[2090]: time="2026-03-07T01:16:42.216452587Z" level=error msg="ContainerStatus for \"9ee547a0f5ed974ed740bdb639fcafa278c4d99e74a2e463c674d95980a92ed2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9ee547a0f5ed974ed740bdb639fcafa278c4d99e74a2e463c674d95980a92ed2\": not found"
Mar 7 01:16:42.218160 kubelet[3365]: E0307 01:16:42.218129 3365 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9ee547a0f5ed974ed740bdb639fcafa278c4d99e74a2e463c674d95980a92ed2\": not found" containerID="9ee547a0f5ed974ed740bdb639fcafa278c4d99e74a2e463c674d95980a92ed2"
Mar 7 01:16:42.218263 kubelet[3365]: I0307 01:16:42.218171 3365 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9ee547a0f5ed974ed740bdb639fcafa278c4d99e74a2e463c674d95980a92ed2"} err="failed to get container status \"9ee547a0f5ed974ed740bdb639fcafa278c4d99e74a2e463c674d95980a92ed2\": rpc error: code = NotFound desc = an error occurred when try to find container \"9ee547a0f5ed974ed740bdb639fcafa278c4d99e74a2e463c674d95980a92ed2\": not found"
Mar 7 01:16:42.218263 kubelet[3365]: I0307 01:16:42.218196 3365 scope.go:117] "RemoveContainer" containerID="65c5afe55b73626e2c38062bb2485ab14d5a151c973d9a1b78255f6665db0de2"
Mar 7 01:16:42.218531 containerd[2090]: time="2026-03-07T01:16:42.218484373Z" level=error msg="ContainerStatus for \"65c5afe55b73626e2c38062bb2485ab14d5a151c973d9a1b78255f6665db0de2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"65c5afe55b73626e2c38062bb2485ab14d5a151c973d9a1b78255f6665db0de2\": not found"
Mar 7 01:16:42.219882 kubelet[3365]: I0307 01:16:42.219846 3365 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/821dff98-0f36-4134-be33-26593ebc63dd-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "821dff98-0f36-4134-be33-26593ebc63dd" (UID: "821dff98-0f36-4134-be33-26593ebc63dd"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 7 01:16:42.220032 kubelet[3365]: E0307 01:16:42.220006 3365 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"65c5afe55b73626e2c38062bb2485ab14d5a151c973d9a1b78255f6665db0de2\": not found" containerID="65c5afe55b73626e2c38062bb2485ab14d5a151c973d9a1b78255f6665db0de2"
Mar 7 01:16:42.220088 kubelet[3365]: I0307 01:16:42.220045 3365 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"65c5afe55b73626e2c38062bb2485ab14d5a151c973d9a1b78255f6665db0de2"} err="failed to get container status \"65c5afe55b73626e2c38062bb2485ab14d5a151c973d9a1b78255f6665db0de2\": rpc error: code = NotFound desc = an error occurred when try to find container \"65c5afe55b73626e2c38062bb2485ab14d5a151c973d9a1b78255f6665db0de2\": not found"
Mar 7 01:16:42.220174 kubelet[3365]: I0307 01:16:42.220153 3365 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/821dff98-0f36-4134-be33-26593ebc63dd-kube-api-access-tlcxv" (OuterVolumeSpecName: "kube-api-access-tlcxv") pod "821dff98-0f36-4134-be33-26593ebc63dd" (UID: "821dff98-0f36-4134-be33-26593ebc63dd"). InnerVolumeSpecName "kube-api-access-tlcxv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 7 01:16:42.221466 kubelet[3365]: I0307 01:16:42.221435 3365 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/821dff98-0f36-4134-be33-26593ebc63dd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "821dff98-0f36-4134-be33-26593ebc63dd" (UID: "821dff98-0f36-4134-be33-26593ebc63dd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 7 01:16:42.221964 kubelet[3365]: I0307 01:16:42.221929 3365 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/821dff98-0f36-4134-be33-26593ebc63dd-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "821dff98-0f36-4134-be33-26593ebc63dd" (UID: "821dff98-0f36-4134-be33-26593ebc63dd"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 7 01:16:42.310524 kubelet[3365]: I0307 01:16:42.310480 3365 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/821dff98-0f36-4134-be33-26593ebc63dd-bpf-maps\") on node \"ip-172-31-16-11\" DevicePath \"\""
Mar 7 01:16:42.310524 kubelet[3365]: I0307 01:16:42.310521 3365 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/821dff98-0f36-4134-be33-26593ebc63dd-host-proc-sys-kernel\") on node \"ip-172-31-16-11\" DevicePath \"\""
Mar 7 01:16:42.310524 kubelet[3365]: I0307 01:16:42.310539 3365 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/821dff98-0f36-4134-be33-26593ebc63dd-host-proc-sys-net\") on node \"ip-172-31-16-11\" DevicePath \"\""
Mar 7 01:16:42.310792 kubelet[3365]: I0307 01:16:42.310552 3365 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/821dff98-0f36-4134-be33-26593ebc63dd-clustermesh-secrets\") on node \"ip-172-31-16-11\" DevicePath \"\""
Mar 7 01:16:42.310792 kubelet[3365]: I0307 01:16:42.310564 3365 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/821dff98-0f36-4134-be33-26593ebc63dd-cilium-run\") on node \"ip-172-31-16-11\" DevicePath \"\""
Mar 7 01:16:42.310792 kubelet[3365]: I0307 01:16:42.310574 3365 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/821dff98-0f36-4134-be33-26593ebc63dd-etc-cni-netd\") on node \"ip-172-31-16-11\" DevicePath \"\""
Mar 7 01:16:42.310792 kubelet[3365]: I0307 01:16:42.310585 3365 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/821dff98-0f36-4134-be33-26593ebc63dd-lib-modules\") on node \"ip-172-31-16-11\" DevicePath \"\""
Mar 7 01:16:42.310792 kubelet[3365]: I0307 01:16:42.310596 3365 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/821dff98-0f36-4134-be33-26593ebc63dd-cilium-config-path\") on node \"ip-172-31-16-11\" DevicePath \"\""
Mar 7 01:16:42.310792 kubelet[3365]: I0307 01:16:42.310606 3365 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/821dff98-0f36-4134-be33-26593ebc63dd-hubble-tls\") on node \"ip-172-31-16-11\" DevicePath \"\""
Mar 7 01:16:42.310792 kubelet[3365]: I0307 01:16:42.310617 3365 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tlcxv\" (UniqueName: \"kubernetes.io/projected/821dff98-0f36-4134-be33-26593ebc63dd-kube-api-access-tlcxv\") on node \"ip-172-31-16-11\" DevicePath \"\""
Mar 7 01:16:42.310792 kubelet[3365]: I0307 01:16:42.310629 3365 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/821dff98-0f36-4134-be33-26593ebc63dd-xtables-lock\") on node \"ip-172-31-16-11\" DevicePath \"\""
Mar 7 01:16:42.311046 kubelet[3365]: I0307 01:16:42.310641 3365 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/821dff98-0f36-4134-be33-26593ebc63dd-hostproc\") on node \"ip-172-31-16-11\" DevicePath \"\""
Mar 7 01:16:42.311046 kubelet[3365]: I0307 01:16:42.310651 3365 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/821dff98-0f36-4134-be33-26593ebc63dd-cilium-cgroup\") on node \"ip-172-31-16-11\" DevicePath \"\""
Mar 7 01:16:42.311046 kubelet[3365]: I0307 01:16:42.310662 3365 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/821dff98-0f36-4134-be33-26593ebc63dd-cni-path\") on node \"ip-172-31-16-11\" DevicePath \"\""
Mar 7 01:16:42.629453 kubelet[3365]: I0307 01:16:42.629348 3365 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a64b2cf-5465-4dc8-822c-0f042c886154" path="/var/lib/kubelet/pods/5a64b2cf-5465-4dc8-822c-0f042c886154/volumes"
Mar 7 01:16:42.630913 kubelet[3365]: I0307 01:16:42.629985 3365 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="821dff98-0f36-4134-be33-26593ebc63dd" path="/var/lib/kubelet/pods/821dff98-0f36-4134-be33-26593ebc63dd/volumes"
Mar 7 01:16:42.735722 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b45fb913ccdd0d8f5cecf085b48aa2434d73ea51af46b18fb5387bdd0484111b-rootfs.mount: Deactivated successfully.
Mar 7 01:16:42.735927 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49b9199dd291f1646a9586dc15730d1da88a37463738488afc407ba0ac905b22-rootfs.mount: Deactivated successfully.
Mar 7 01:16:42.736069 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-49b9199dd291f1646a9586dc15730d1da88a37463738488afc407ba0ac905b22-shm.mount: Deactivated successfully.
Mar 7 01:16:42.736227 systemd[1]: var-lib-kubelet-pods-5a64b2cf\x2d5465\x2d4dc8\x2d822c\x2d0f042c886154-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl5gz4.mount: Deactivated successfully.
Mar 7 01:16:42.736375 systemd[1]: var-lib-kubelet-pods-821dff98\x2d0f36\x2d4134\x2dbe33\x2d26593ebc63dd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtlcxv.mount: Deactivated successfully.
Mar 7 01:16:42.736490 systemd[1]: var-lib-kubelet-pods-821dff98\x2d0f36\x2d4134\x2dbe33\x2d26593ebc63dd-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 7 01:16:42.736586 systemd[1]: var-lib-kubelet-pods-821dff98\x2d0f36\x2d4134\x2dbe33\x2d26593ebc63dd-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 7 01:16:43.691179 sshd[5135]: pam_unix(sshd:session): session closed for user core Mar 7 01:16:43.697957 systemd[1]: sshd@21-172.31.16.11:22-68.220.241.50:53862.service: Deactivated successfully. Mar 7 01:16:43.703989 systemd-logind[2063]: Session 22 logged out. Waiting for processes to exit. Mar 7 01:16:43.705122 systemd[1]: session-22.scope: Deactivated successfully. Mar 7 01:16:43.706203 systemd-logind[2063]: Removed session 22. Mar 7 01:16:43.782017 systemd[1]: Started sshd@22-172.31.16.11:22-68.220.241.50:56414.service - OpenSSH per-connection server daemon (68.220.241.50:56414). Mar 7 01:16:44.265833 sshd[5309]: Accepted publickey for core from 68.220.241.50 port 56414 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:16:44.267375 sshd[5309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:16:44.272798 systemd-logind[2063]: New session 23 of user core. Mar 7 01:16:44.280132 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 7 01:16:44.399870 ntpd[2052]: Deleting interface #10 lxc_health, fe80::7c98:83ff:fe77:8427%8#123, interface stats: received=0, sent=0, dropped=0, active_time=67 secs Mar 7 01:16:44.400284 ntpd[2052]: 7 Mar 01:16:44 ntpd[2052]: Deleting interface #10 lxc_health, fe80::7c98:83ff:fe77:8427%8#123, interface stats: received=0, sent=0, dropped=0, active_time=67 secs Mar 7 01:16:45.109046 sshd[5309]: pam_unix(sshd:session): session closed for user core Mar 7 01:16:45.126299 systemd[1]: sshd@22-172.31.16.11:22-68.220.241.50:56414.service: Deactivated successfully. 
Mar 7 01:16:45.139960 systemd[1]: session-23.scope: Deactivated successfully. Mar 7 01:16:45.147998 systemd-logind[2063]: Session 23 logged out. Waiting for processes to exit. Mar 7 01:16:45.152443 systemd-logind[2063]: Removed session 23. Mar 7 01:16:45.187062 systemd[1]: Started sshd@23-172.31.16.11:22-68.220.241.50:56422.service - OpenSSH per-connection server daemon (68.220.241.50:56422). Mar 7 01:16:45.239925 kubelet[3365]: I0307 01:16:45.239879 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8906b467-c9da-4d58-84e0-d9caab6f3aec-cilium-cgroup\") pod \"cilium-xqmlr\" (UID: \"8906b467-c9da-4d58-84e0-d9caab6f3aec\") " pod="kube-system/cilium-xqmlr" Mar 7 01:16:45.239925 kubelet[3365]: I0307 01:16:45.239920 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8906b467-c9da-4d58-84e0-d9caab6f3aec-cni-path\") pod \"cilium-xqmlr\" (UID: \"8906b467-c9da-4d58-84e0-d9caab6f3aec\") " pod="kube-system/cilium-xqmlr" Mar 7 01:16:45.240471 kubelet[3365]: I0307 01:16:45.239944 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8906b467-c9da-4d58-84e0-d9caab6f3aec-cilium-ipsec-secrets\") pod \"cilium-xqmlr\" (UID: \"8906b467-c9da-4d58-84e0-d9caab6f3aec\") " pod="kube-system/cilium-xqmlr" Mar 7 01:16:45.240471 kubelet[3365]: I0307 01:16:45.239964 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8906b467-c9da-4d58-84e0-d9caab6f3aec-cilium-run\") pod \"cilium-xqmlr\" (UID: \"8906b467-c9da-4d58-84e0-d9caab6f3aec\") " pod="kube-system/cilium-xqmlr" Mar 7 01:16:45.240471 kubelet[3365]: I0307 01:16:45.239983 3365 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8906b467-c9da-4d58-84e0-d9caab6f3aec-host-proc-sys-net\") pod \"cilium-xqmlr\" (UID: \"8906b467-c9da-4d58-84e0-d9caab6f3aec\") " pod="kube-system/cilium-xqmlr" Mar 7 01:16:45.240471 kubelet[3365]: I0307 01:16:45.240004 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8906b467-c9da-4d58-84e0-d9caab6f3aec-bpf-maps\") pod \"cilium-xqmlr\" (UID: \"8906b467-c9da-4d58-84e0-d9caab6f3aec\") " pod="kube-system/cilium-xqmlr" Mar 7 01:16:45.240471 kubelet[3365]: I0307 01:16:45.240027 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8906b467-c9da-4d58-84e0-d9caab6f3aec-host-proc-sys-kernel\") pod \"cilium-xqmlr\" (UID: \"8906b467-c9da-4d58-84e0-d9caab6f3aec\") " pod="kube-system/cilium-xqmlr" Mar 7 01:16:45.240471 kubelet[3365]: I0307 01:16:45.240062 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8906b467-c9da-4d58-84e0-d9caab6f3aec-lib-modules\") pod \"cilium-xqmlr\" (UID: \"8906b467-c9da-4d58-84e0-d9caab6f3aec\") " pod="kube-system/cilium-xqmlr" Mar 7 01:16:45.240643 kubelet[3365]: I0307 01:16:45.240094 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8906b467-c9da-4d58-84e0-d9caab6f3aec-xtables-lock\") pod \"cilium-xqmlr\" (UID: \"8906b467-c9da-4d58-84e0-d9caab6f3aec\") " pod="kube-system/cilium-xqmlr" Mar 7 01:16:45.240643 kubelet[3365]: I0307 01:16:45.240122 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/8906b467-c9da-4d58-84e0-d9caab6f3aec-clustermesh-secrets\") pod \"cilium-xqmlr\" (UID: \"8906b467-c9da-4d58-84e0-d9caab6f3aec\") " pod="kube-system/cilium-xqmlr" Mar 7 01:16:45.240643 kubelet[3365]: I0307 01:16:45.240144 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8906b467-c9da-4d58-84e0-d9caab6f3aec-hubble-tls\") pod \"cilium-xqmlr\" (UID: \"8906b467-c9da-4d58-84e0-d9caab6f3aec\") " pod="kube-system/cilium-xqmlr" Mar 7 01:16:45.240643 kubelet[3365]: I0307 01:16:45.240170 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glw4n\" (UniqueName: \"kubernetes.io/projected/8906b467-c9da-4d58-84e0-d9caab6f3aec-kube-api-access-glw4n\") pod \"cilium-xqmlr\" (UID: \"8906b467-c9da-4d58-84e0-d9caab6f3aec\") " pod="kube-system/cilium-xqmlr" Mar 7 01:16:45.240643 kubelet[3365]: I0307 01:16:45.240200 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8906b467-c9da-4d58-84e0-d9caab6f3aec-etc-cni-netd\") pod \"cilium-xqmlr\" (UID: \"8906b467-c9da-4d58-84e0-d9caab6f3aec\") " pod="kube-system/cilium-xqmlr" Mar 7 01:16:45.240643 kubelet[3365]: I0307 01:16:45.240226 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8906b467-c9da-4d58-84e0-d9caab6f3aec-cilium-config-path\") pod \"cilium-xqmlr\" (UID: \"8906b467-c9da-4d58-84e0-d9caab6f3aec\") " pod="kube-system/cilium-xqmlr" Mar 7 01:16:45.240893 kubelet[3365]: I0307 01:16:45.240251 3365 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8906b467-c9da-4d58-84e0-d9caab6f3aec-hostproc\") pod \"cilium-xqmlr\" (UID: 
\"8906b467-c9da-4d58-84e0-d9caab6f3aec\") " pod="kube-system/cilium-xqmlr" Mar 7 01:16:45.389784 containerd[2090]: time="2026-03-07T01:16:45.389658445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xqmlr,Uid:8906b467-c9da-4d58-84e0-d9caab6f3aec,Namespace:kube-system,Attempt:0,}" Mar 7 01:16:45.426901 containerd[2090]: time="2026-03-07T01:16:45.426442472Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:16:45.426901 containerd[2090]: time="2026-03-07T01:16:45.426536004Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:16:45.426901 containerd[2090]: time="2026-03-07T01:16:45.426561516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:45.426901 containerd[2090]: time="2026-03-07T01:16:45.426668284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:16:45.476704 containerd[2090]: time="2026-03-07T01:16:45.476367771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xqmlr,Uid:8906b467-c9da-4d58-84e0-d9caab6f3aec,Namespace:kube-system,Attempt:0,} returns sandbox id \"3054509759d5c1e477a17e8ff1d7030931730c3809433529c762e4a3189ed218\"" Mar 7 01:16:45.485792 containerd[2090]: time="2026-03-07T01:16:45.485635749Z" level=info msg="CreateContainer within sandbox \"3054509759d5c1e477a17e8ff1d7030931730c3809433529c762e4a3189ed218\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 7 01:16:45.504954 containerd[2090]: time="2026-03-07T01:16:45.504865413Z" level=info msg="CreateContainer within sandbox \"3054509759d5c1e477a17e8ff1d7030931730c3809433529c762e4a3189ed218\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"39db67e0c2c05ca4b4314cc063b4f3c3c0c41463467f358c2436086a5b66023f\"" Mar 7 01:16:45.506387 containerd[2090]: time="2026-03-07T01:16:45.506183871Z" level=info msg="StartContainer for \"39db67e0c2c05ca4b4314cc063b4f3c3c0c41463467f358c2436086a5b66023f\"" Mar 7 01:16:45.561326 containerd[2090]: time="2026-03-07T01:16:45.561017899Z" level=info msg="StartContainer for \"39db67e0c2c05ca4b4314cc063b4f3c3c0c41463467f358c2436086a5b66023f\" returns successfully" Mar 7 01:16:45.629426 containerd[2090]: time="2026-03-07T01:16:45.629336400Z" level=info msg="shim disconnected" id=39db67e0c2c05ca4b4314cc063b4f3c3c0c41463467f358c2436086a5b66023f namespace=k8s.io Mar 7 01:16:45.629426 containerd[2090]: time="2026-03-07T01:16:45.629397869Z" level=warning msg="cleaning up after shim disconnected" id=39db67e0c2c05ca4b4314cc063b4f3c3c0c41463467f358c2436086a5b66023f namespace=k8s.io Mar 7 01:16:45.629426 containerd[2090]: time="2026-03-07T01:16:45.629411173Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:16:45.643944 containerd[2090]: time="2026-03-07T01:16:45.643809367Z" level=warning 
msg="cleanup warnings time=\"2026-03-07T01:16:45Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 7 01:16:45.681005 sshd[5322]: Accepted publickey for core from 68.220.241.50 port 56422 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:16:45.682765 sshd[5322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:16:45.688173 systemd-logind[2063]: New session 24 of user core. Mar 7 01:16:45.693033 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 7 01:16:45.752949 kubelet[3365]: E0307 01:16:45.752891 3365 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 7 01:16:46.029952 sshd[5322]: pam_unix(sshd:session): session closed for user core Mar 7 01:16:46.034508 systemd[1]: sshd@23-172.31.16.11:22-68.220.241.50:56422.service: Deactivated successfully. Mar 7 01:16:46.039413 systemd-logind[2063]: Session 24 logged out. Waiting for processes to exit. Mar 7 01:16:46.039940 systemd[1]: session-24.scope: Deactivated successfully. Mar 7 01:16:46.041876 systemd-logind[2063]: Removed session 24. Mar 7 01:16:46.112620 systemd[1]: Started sshd@24-172.31.16.11:22-68.220.241.50:56428.service - OpenSSH per-connection server daemon (68.220.241.50:56428). 
Mar 7 01:16:46.134037 containerd[2090]: time="2026-03-07T01:16:46.133986300Z" level=info msg="CreateContainer within sandbox \"3054509759d5c1e477a17e8ff1d7030931730c3809433529c762e4a3189ed218\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 7 01:16:46.157968 containerd[2090]: time="2026-03-07T01:16:46.157919852Z" level=info msg="CreateContainer within sandbox \"3054509759d5c1e477a17e8ff1d7030931730c3809433529c762e4a3189ed218\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"09c460fe4014d866b1a18fa8c18228f753e6d6118d66017cc08a287db44a8fe2\"" Mar 7 01:16:46.160171 containerd[2090]: time="2026-03-07T01:16:46.160128683Z" level=info msg="StartContainer for \"09c460fe4014d866b1a18fa8c18228f753e6d6118d66017cc08a287db44a8fe2\"" Mar 7 01:16:46.220345 containerd[2090]: time="2026-03-07T01:16:46.220275107Z" level=info msg="StartContainer for \"09c460fe4014d866b1a18fa8c18228f753e6d6118d66017cc08a287db44a8fe2\" returns successfully" Mar 7 01:16:46.287451 containerd[2090]: time="2026-03-07T01:16:46.286708708Z" level=info msg="shim disconnected" id=09c460fe4014d866b1a18fa8c18228f753e6d6118d66017cc08a287db44a8fe2 namespace=k8s.io Mar 7 01:16:46.287451 containerd[2090]: time="2026-03-07T01:16:46.286775154Z" level=warning msg="cleaning up after shim disconnected" id=09c460fe4014d866b1a18fa8c18228f753e6d6118d66017cc08a287db44a8fe2 namespace=k8s.io Mar 7 01:16:46.287451 containerd[2090]: time="2026-03-07T01:16:46.286789995Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:16:46.597994 sshd[5439]: Accepted publickey for core from 68.220.241.50 port 56428 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:16:46.599841 sshd[5439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:16:46.607861 systemd-logind[2063]: New session 25 of user core. Mar 7 01:16:46.613411 systemd[1]: Started session-25.scope - Session 25 of User core. 
Mar 7 01:16:47.133576 containerd[2090]: time="2026-03-07T01:16:47.133534305Z" level=info msg="CreateContainer within sandbox \"3054509759d5c1e477a17e8ff1d7030931730c3809433529c762e4a3189ed218\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 7 01:16:47.159335 containerd[2090]: time="2026-03-07T01:16:47.159293394Z" level=info msg="CreateContainer within sandbox \"3054509759d5c1e477a17e8ff1d7030931730c3809433529c762e4a3189ed218\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c55ec0e3a8468d9925b704066225c39388bb0756fe571028f82fbf2710e5560e\"" Mar 7 01:16:47.160125 containerd[2090]: time="2026-03-07T01:16:47.160090357Z" level=info msg="StartContainer for \"c55ec0e3a8468d9925b704066225c39388bb0756fe571028f82fbf2710e5560e\"" Mar 7 01:16:47.233374 containerd[2090]: time="2026-03-07T01:16:47.232533349Z" level=info msg="StartContainer for \"c55ec0e3a8468d9925b704066225c39388bb0756fe571028f82fbf2710e5560e\" returns successfully" Mar 7 01:16:47.278676 containerd[2090]: time="2026-03-07T01:16:47.278603777Z" level=info msg="shim disconnected" id=c55ec0e3a8468d9925b704066225c39388bb0756fe571028f82fbf2710e5560e namespace=k8s.io Mar 7 01:16:47.278676 containerd[2090]: time="2026-03-07T01:16:47.278671228Z" level=warning msg="cleaning up after shim disconnected" id=c55ec0e3a8468d9925b704066225c39388bb0756fe571028f82fbf2710e5560e namespace=k8s.io Mar 7 01:16:47.278676 containerd[2090]: time="2026-03-07T01:16:47.278683575Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:16:47.363570 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c55ec0e3a8468d9925b704066225c39388bb0756fe571028f82fbf2710e5560e-rootfs.mount: Deactivated successfully. 
Mar 7 01:16:48.137726 containerd[2090]: time="2026-03-07T01:16:48.134950277Z" level=info msg="CreateContainer within sandbox \"3054509759d5c1e477a17e8ff1d7030931730c3809433529c762e4a3189ed218\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 7 01:16:48.174286 containerd[2090]: time="2026-03-07T01:16:48.174231543Z" level=info msg="CreateContainer within sandbox \"3054509759d5c1e477a17e8ff1d7030931730c3809433529c762e4a3189ed218\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6a359ed59deeaa122aa1f45594b9b796704d7cec3d0cfd8b84d82cad99ada83d\"" Mar 7 01:16:48.175983 containerd[2090]: time="2026-03-07T01:16:48.175921353Z" level=info msg="StartContainer for \"6a359ed59deeaa122aa1f45594b9b796704d7cec3d0cfd8b84d82cad99ada83d\"" Mar 7 01:16:48.246052 containerd[2090]: time="2026-03-07T01:16:48.246007559Z" level=info msg="StartContainer for \"6a359ed59deeaa122aa1f45594b9b796704d7cec3d0cfd8b84d82cad99ada83d\" returns successfully" Mar 7 01:16:48.277049 containerd[2090]: time="2026-03-07T01:16:48.276972828Z" level=info msg="shim disconnected" id=6a359ed59deeaa122aa1f45594b9b796704d7cec3d0cfd8b84d82cad99ada83d namespace=k8s.io Mar 7 01:16:48.277049 containerd[2090]: time="2026-03-07T01:16:48.277045850Z" level=warning msg="cleaning up after shim disconnected" id=6a359ed59deeaa122aa1f45594b9b796704d7cec3d0cfd8b84d82cad99ada83d namespace=k8s.io Mar 7 01:16:48.277509 containerd[2090]: time="2026-03-07T01:16:48.277059751Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:16:48.363843 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a359ed59deeaa122aa1f45594b9b796704d7cec3d0cfd8b84d82cad99ada83d-rootfs.mount: Deactivated successfully. 
Mar 7 01:16:49.143645 containerd[2090]: time="2026-03-07T01:16:49.143592158Z" level=info msg="CreateContainer within sandbox \"3054509759d5c1e477a17e8ff1d7030931730c3809433529c762e4a3189ed218\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 7 01:16:49.166621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3162487726.mount: Deactivated successfully. Mar 7 01:16:49.171388 containerd[2090]: time="2026-03-07T01:16:49.171338721Z" level=info msg="CreateContainer within sandbox \"3054509759d5c1e477a17e8ff1d7030931730c3809433529c762e4a3189ed218\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bd96a23fb7f4b388d3e8b655b8473d654a2b2204ad230801e00b230b79a9bcc4\"" Mar 7 01:16:49.172789 containerd[2090]: time="2026-03-07T01:16:49.172130279Z" level=info msg="StartContainer for \"bd96a23fb7f4b388d3e8b655b8473d654a2b2204ad230801e00b230b79a9bcc4\"" Mar 7 01:16:49.243473 containerd[2090]: time="2026-03-07T01:16:49.243307468Z" level=info msg="StartContainer for \"bd96a23fb7f4b388d3e8b655b8473d654a2b2204ad230801e00b230b79a9bcc4\" returns successfully" Mar 7 01:16:49.914735 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Mar 7 01:16:50.166913 kubelet[3365]: I0307 01:16:50.166753 3365 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xqmlr" podStartSLOduration=5.166730638 podStartE2EDuration="5.166730638s" podCreationTimestamp="2026-03-07 01:16:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:16:50.166365704 +0000 UTC m=+99.723569796" watchObservedRunningTime="2026-03-07 01:16:50.166730638 +0000 UTC m=+99.723934721" Mar 7 01:16:53.037413 systemd-networkd[1654]: lxc_health: Link UP Mar 7 01:16:53.045669 systemd-networkd[1654]: lxc_health: Gained carrier Mar 7 01:16:53.049152 (udev-worker)[6186]: Network interface NamePolicy= disabled on kernel command 
line. Mar 7 01:16:53.954701 kubelet[3365]: E0307 01:16:53.954499 3365 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:40604->127.0.0.1:44259: write tcp 172.31.16.11:10250->172.31.16.11:41470: write: broken pipe Mar 7 01:16:54.168950 systemd-networkd[1654]: lxc_health: Gained IPv6LL Mar 7 01:16:56.399923 ntpd[2052]: Listen normally on 13 lxc_health [fe80::9cee:52ff:fe9f:7d81%14]:123 Mar 7 01:16:56.400529 ntpd[2052]: 7 Mar 01:16:56 ntpd[2052]: Listen normally on 13 lxc_health [fe80::9cee:52ff:fe9f:7d81%14]:123 Mar 7 01:16:56.712364 systemd[1]: run-containerd-runc-k8s.io-bd96a23fb7f4b388d3e8b655b8473d654a2b2204ad230801e00b230b79a9bcc4-runc.z26SNb.mount: Deactivated successfully. Mar 7 01:17:03.316449 systemd[1]: run-containerd-runc-k8s.io-bd96a23fb7f4b388d3e8b655b8473d654a2b2204ad230801e00b230b79a9bcc4-runc.8jPkZ9.mount: Deactivated successfully. Mar 7 01:17:03.483505 sshd[5439]: pam_unix(sshd:session): session closed for user core Mar 7 01:17:03.487530 systemd[1]: sshd@24-172.31.16.11:22-68.220.241.50:56428.service: Deactivated successfully. Mar 7 01:17:03.493356 systemd-logind[2063]: Session 25 logged out. Waiting for processes to exit. Mar 7 01:17:03.494134 systemd[1]: session-25.scope: Deactivated successfully. Mar 7 01:17:03.496192 systemd-logind[2063]: Removed session 25. 
Mar 7 01:17:10.678625 containerd[2090]: time="2026-03-07T01:17:10.678530884Z" level=info msg="StopPodSandbox for \"49b9199dd291f1646a9586dc15730d1da88a37463738488afc407ba0ac905b22\"" Mar 7 01:17:10.680103 containerd[2090]: time="2026-03-07T01:17:10.679292452Z" level=info msg="TearDown network for sandbox \"49b9199dd291f1646a9586dc15730d1da88a37463738488afc407ba0ac905b22\" successfully" Mar 7 01:17:10.680103 containerd[2090]: time="2026-03-07T01:17:10.679337083Z" level=info msg="StopPodSandbox for \"49b9199dd291f1646a9586dc15730d1da88a37463738488afc407ba0ac905b22\" returns successfully" Mar 7 01:17:10.681017 containerd[2090]: time="2026-03-07T01:17:10.680840739Z" level=info msg="RemovePodSandbox for \"49b9199dd291f1646a9586dc15730d1da88a37463738488afc407ba0ac905b22\"" Mar 7 01:17:10.681017 containerd[2090]: time="2026-03-07T01:17:10.680904717Z" level=info msg="Forcibly stopping sandbox \"49b9199dd291f1646a9586dc15730d1da88a37463738488afc407ba0ac905b22\"" Mar 7 01:17:10.681446 containerd[2090]: time="2026-03-07T01:17:10.681246605Z" level=info msg="TearDown network for sandbox \"49b9199dd291f1646a9586dc15730d1da88a37463738488afc407ba0ac905b22\" successfully" Mar 7 01:17:10.687387 containerd[2090]: time="2026-03-07T01:17:10.687333010Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"49b9199dd291f1646a9586dc15730d1da88a37463738488afc407ba0ac905b22\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:17:10.687529 containerd[2090]: time="2026-03-07T01:17:10.687407381Z" level=info msg="RemovePodSandbox \"49b9199dd291f1646a9586dc15730d1da88a37463738488afc407ba0ac905b22\" returns successfully" Mar 7 01:17:10.688018 containerd[2090]: time="2026-03-07T01:17:10.687988356Z" level=info msg="StopPodSandbox for \"b45fb913ccdd0d8f5cecf085b48aa2434d73ea51af46b18fb5387bdd0484111b\"" Mar 7 01:17:10.688106 containerd[2090]: time="2026-03-07T01:17:10.688080144Z" level=info msg="TearDown network for sandbox \"b45fb913ccdd0d8f5cecf085b48aa2434d73ea51af46b18fb5387bdd0484111b\" successfully" Mar 7 01:17:10.688106 containerd[2090]: time="2026-03-07T01:17:10.688097720Z" level=info msg="StopPodSandbox for \"b45fb913ccdd0d8f5cecf085b48aa2434d73ea51af46b18fb5387bdd0484111b\" returns successfully" Mar 7 01:17:10.688425 containerd[2090]: time="2026-03-07T01:17:10.688400437Z" level=info msg="RemovePodSandbox for \"b45fb913ccdd0d8f5cecf085b48aa2434d73ea51af46b18fb5387bdd0484111b\"" Mar 7 01:17:10.688425 containerd[2090]: time="2026-03-07T01:17:10.688425410Z" level=info msg="Forcibly stopping sandbox \"b45fb913ccdd0d8f5cecf085b48aa2434d73ea51af46b18fb5387bdd0484111b\"" Mar 7 01:17:10.688546 containerd[2090]: time="2026-03-07T01:17:10.688483666Z" level=info msg="TearDown network for sandbox \"b45fb913ccdd0d8f5cecf085b48aa2434d73ea51af46b18fb5387bdd0484111b\" successfully" Mar 7 01:17:10.694635 containerd[2090]: time="2026-03-07T01:17:10.694534374Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b45fb913ccdd0d8f5cecf085b48aa2434d73ea51af46b18fb5387bdd0484111b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 01:17:10.694814 containerd[2090]: time="2026-03-07T01:17:10.694655060Z" level=info msg="RemovePodSandbox \"b45fb913ccdd0d8f5cecf085b48aa2434d73ea51af46b18fb5387bdd0484111b\" returns successfully" Mar 7 01:17:19.224500 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-046db1dbf9677f053746ccda8e7b683eb45298a446fed18a4562cf7d44995543-rootfs.mount: Deactivated successfully. Mar 7 01:17:19.258004 containerd[2090]: time="2026-03-07T01:17:19.257926812Z" level=info msg="shim disconnected" id=046db1dbf9677f053746ccda8e7b683eb45298a446fed18a4562cf7d44995543 namespace=k8s.io Mar 7 01:17:19.258004 containerd[2090]: time="2026-03-07T01:17:19.257982752Z" level=warning msg="cleaning up after shim disconnected" id=046db1dbf9677f053746ccda8e7b683eb45298a446fed18a4562cf7d44995543 namespace=k8s.io Mar 7 01:17:19.258004 containerd[2090]: time="2026-03-07T01:17:19.257996342Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:17:20.229463 kubelet[3365]: I0307 01:17:20.229415 3365 scope.go:117] "RemoveContainer" containerID="046db1dbf9677f053746ccda8e7b683eb45298a446fed18a4562cf7d44995543" Mar 7 01:17:20.232670 containerd[2090]: time="2026-03-07T01:17:20.232631076Z" level=info msg="CreateContainer within sandbox \"4d1ee218d999d7a76fd373fb04e602dbd7701757517310834ae918bd8559375d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Mar 7 01:17:20.255554 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2193703705.mount: Deactivated successfully. 
Mar 7 01:17:20.261774 containerd[2090]: time="2026-03-07T01:17:20.261723139Z" level=info msg="CreateContainer within sandbox \"4d1ee218d999d7a76fd373fb04e602dbd7701757517310834ae918bd8559375d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"d02be36a25151fc00c631cd1857fb6ca906b506a88fc7ba6dbb2ad40b2a7c437\"" Mar 7 01:17:20.263618 containerd[2090]: time="2026-03-07T01:17:20.262307238Z" level=info msg="StartContainer for \"d02be36a25151fc00c631cd1857fb6ca906b506a88fc7ba6dbb2ad40b2a7c437\"" Mar 7 01:17:20.350182 containerd[2090]: time="2026-03-07T01:17:20.350137135Z" level=info msg="StartContainer for \"d02be36a25151fc00c631cd1857fb6ca906b506a88fc7ba6dbb2ad40b2a7c437\" returns successfully" Mar 7 01:17:23.189135 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dada990fdfee34d0fbc004e403f0d25ace210080ec8aaab327d5c8faf748092d-rootfs.mount: Deactivated successfully. Mar 7 01:17:23.206014 containerd[2090]: time="2026-03-07T01:17:23.205922825Z" level=info msg="shim disconnected" id=dada990fdfee34d0fbc004e403f0d25ace210080ec8aaab327d5c8faf748092d namespace=k8s.io Mar 7 01:17:23.206014 containerd[2090]: time="2026-03-07T01:17:23.205996902Z" level=warning msg="cleaning up after shim disconnected" id=dada990fdfee34d0fbc004e403f0d25ace210080ec8aaab327d5c8faf748092d namespace=k8s.io Mar 7 01:17:23.206014 containerd[2090]: time="2026-03-07T01:17:23.206011510Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:17:23.239795 kubelet[3365]: I0307 01:17:23.239761 3365 scope.go:117] "RemoveContainer" containerID="dada990fdfee34d0fbc004e403f0d25ace210080ec8aaab327d5c8faf748092d" Mar 7 01:17:23.242470 containerd[2090]: time="2026-03-07T01:17:23.242430125Z" level=info msg="CreateContainer within sandbox \"570c08d5f080173c9bf433e3ebf75055a45e25d11b11f22017670bacdddfc599\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Mar 7 01:17:23.272664 containerd[2090]: time="2026-03-07T01:17:23.272494919Z" 
level=info msg="CreateContainer within sandbox \"570c08d5f080173c9bf433e3ebf75055a45e25d11b11f22017670bacdddfc599\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"773ab7f047528bd9d550bccc95ffd08bc73ab884aa3163248a70a58978b864ab\"" Mar 7 01:17:23.273381 containerd[2090]: time="2026-03-07T01:17:23.273325090Z" level=info msg="StartContainer for \"773ab7f047528bd9d550bccc95ffd08bc73ab884aa3163248a70a58978b864ab\"" Mar 7 01:17:23.354726 containerd[2090]: time="2026-03-07T01:17:23.354662544Z" level=info msg="StartContainer for \"773ab7f047528bd9d550bccc95ffd08bc73ab884aa3163248a70a58978b864ab\" returns successfully" Mar 7 01:17:23.535986 kubelet[3365]: E0307 01:17:23.534964 3365 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-11?timeout=10s\": context deadline exceeded" Mar 7 01:17:33.537788 kubelet[3365]: E0307 01:17:33.537642 3365 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-11?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"