Mar 14 00:20:58.003701 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 13 22:25:24 -00 2026 Mar 14 00:20:58.003747 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7 Mar 14 00:20:58.003766 kernel: BIOS-provided physical RAM map: Mar 14 00:20:58.003778 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Mar 14 00:20:58.003788 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Mar 14 00:20:58.003799 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20 Mar 14 00:20:58.003813 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved Mar 14 00:20:58.003826 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Mar 14 00:20:58.003838 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Mar 14 00:20:58.003854 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Mar 14 00:20:58.003866 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Mar 14 00:20:58.003878 kernel: NX (Execute Disable) protection: active Mar 14 00:20:58.003890 kernel: APIC: Static calls initialized Mar 14 00:20:58.003903 kernel: efi: EFI v2.7 by EDK II Mar 14 00:20:58.003919 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77015518 Mar 14 00:20:58.003937 kernel: SMBIOS 2.7 present. 
Mar 14 00:20:58.003951 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Mar 14 00:20:58.003965 kernel: Hypervisor detected: KVM Mar 14 00:20:58.003980 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 14 00:20:58.003993 kernel: kvm-clock: using sched offset of 3620150706 cycles Mar 14 00:20:58.004007 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 14 00:20:58.004019 kernel: tsc: Detected 2499.996 MHz processor Mar 14 00:20:58.004031 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 14 00:20:58.004043 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 14 00:20:58.004057 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Mar 14 00:20:58.004075 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Mar 14 00:20:58.004090 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 14 00:20:58.004104 kernel: Using GB pages for direct mapping Mar 14 00:20:58.004118 kernel: Secure boot disabled Mar 14 00:20:58.004133 kernel: ACPI: Early table checksum verification disabled Mar 14 00:20:58.004146 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Mar 14 00:20:58.004159 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Mar 14 00:20:58.004171 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Mar 14 00:20:58.004185 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Mar 14 00:20:58.004202 kernel: ACPI: FACS 0x00000000789D0000 000040 Mar 14 00:20:58.004214 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Mar 14 00:20:58.004227 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Mar 14 00:20:58.004241 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Mar 14 00:20:58.004254 kernel: ACPI: 
SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Mar 14 00:20:58.004282 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Mar 14 00:20:58.004303 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Mar 14 00:20:58.004321 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Mar 14 00:20:58.004335 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Mar 14 00:20:58.004350 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Mar 14 00:20:58.004364 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Mar 14 00:20:58.004377 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Mar 14 00:20:58.004391 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Mar 14 00:20:58.004410 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] Mar 14 00:20:58.004425 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] Mar 14 00:20:58.004440 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Mar 14 00:20:58.004455 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Mar 14 00:20:58.004470 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Mar 14 00:20:58.004486 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] Mar 14 00:20:58.004502 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037] Mar 14 00:20:58.004518 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Mar 14 00:20:58.004534 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Mar 14 00:20:58.004550 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Mar 14 00:20:58.004569 kernel: NUMA: Initialized distance table, cnt=1 Mar 14 00:20:58.004584 kernel: NODE_DATA(0) allocated [mem 0x7a8f0000-0x7a8f5fff] Mar 14 00:20:58.004599 kernel: Zone ranges: Mar 14 
00:20:58.004616 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 14 00:20:58.004631 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Mar 14 00:20:58.004647 kernel: Normal empty Mar 14 00:20:58.004663 kernel: Movable zone start for each node Mar 14 00:20:58.004678 kernel: Early memory node ranges Mar 14 00:20:58.004694 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Mar 14 00:20:58.004713 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Mar 14 00:20:58.004728 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Mar 14 00:20:58.004744 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Mar 14 00:20:58.004760 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 14 00:20:58.004776 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Mar 14 00:20:58.004792 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Mar 14 00:20:58.004809 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Mar 14 00:20:58.004824 kernel: ACPI: PM-Timer IO Port: 0xb008 Mar 14 00:20:58.004840 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 14 00:20:58.004856 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Mar 14 00:20:58.004877 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 14 00:20:58.004893 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 14 00:20:58.004908 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 14 00:20:58.004924 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 14 00:20:58.004940 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 14 00:20:58.004956 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 14 00:20:58.004972 kernel: TSC deadline timer available Mar 14 00:20:58.004988 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Mar 14 00:20:58.005005 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 
14 00:20:58.005025 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices Mar 14 00:20:58.005041 kernel: Booting paravirtualized kernel on KVM Mar 14 00:20:58.005057 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 14 00:20:58.005074 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Mar 14 00:20:58.005090 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Mar 14 00:20:58.005106 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Mar 14 00:20:58.005121 kernel: pcpu-alloc: [0] 0 1 Mar 14 00:20:58.005137 kernel: kvm-guest: PV spinlocks enabled Mar 14 00:20:58.005153 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 14 00:20:58.005176 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7 Mar 14 00:20:58.005193 kernel: random: crng init done Mar 14 00:20:58.005209 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 14 00:20:58.005224 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Mar 14 00:20:58.005240 kernel: Fallback order for Node 0: 0 Mar 14 00:20:58.005256 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 501318 Mar 14 00:20:58.008298 kernel: Policy zone: DMA32 Mar 14 00:20:58.008326 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 14 00:20:58.008351 kernel: Memory: 1874624K/2037804K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 162920K reserved, 0K cma-reserved) Mar 14 00:20:58.008368 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Mar 14 00:20:58.008384 kernel: Kernel/User page tables isolation: enabled Mar 14 00:20:58.008400 kernel: ftrace: allocating 37996 entries in 149 pages Mar 14 00:20:58.008416 kernel: ftrace: allocated 149 pages with 4 groups Mar 14 00:20:58.008431 kernel: Dynamic Preempt: voluntary Mar 14 00:20:58.008446 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 14 00:20:58.008462 kernel: rcu: RCU event tracing is enabled. Mar 14 00:20:58.008477 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Mar 14 00:20:58.008497 kernel: Trampoline variant of Tasks RCU enabled. Mar 14 00:20:58.008513 kernel: Rude variant of Tasks RCU enabled. Mar 14 00:20:58.008529 kernel: Tracing variant of Tasks RCU enabled. Mar 14 00:20:58.008542 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 14 00:20:58.008555 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Mar 14 00:20:58.008568 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Mar 14 00:20:58.008582 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Mar 14 00:20:58.008611 kernel: Console: colour dummy device 80x25 Mar 14 00:20:58.008625 kernel: printk: console [tty0] enabled Mar 14 00:20:58.008641 kernel: printk: console [ttyS0] enabled Mar 14 00:20:58.008656 kernel: ACPI: Core revision 20230628 Mar 14 00:20:58.008671 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Mar 14 00:20:58.008689 kernel: APIC: Switch to symmetric I/O mode setup Mar 14 00:20:58.008704 kernel: x2apic enabled Mar 14 00:20:58.008720 kernel: APIC: Switched APIC routing to: physical x2apic Mar 14 00:20:58.008736 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Mar 14 00:20:58.008751 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996) Mar 14 00:20:58.008770 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Mar 14 00:20:58.008785 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Mar 14 00:20:58.008802 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 14 00:20:58.008819 kernel: Spectre V2 : Mitigation: Retpolines Mar 14 00:20:58.008835 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Mar 14 00:20:58.008852 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Mar 14 00:20:58.008869 kernel: RETBleed: Vulnerable Mar 14 00:20:58.008885 kernel: Speculative Store Bypass: Vulnerable Mar 14 00:20:58.008902 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Mar 14 00:20:58.008919 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Mar 14 00:20:58.008940 kernel: GDS: Unknown: Dependent on hypervisor status Mar 14 00:20:58.008957 kernel: active return thunk: its_return_thunk Mar 14 00:20:58.008973 kernel: ITS: Mitigation: Aligned branch/return thunks Mar 14 00:20:58.008990 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 14 00:20:58.009008 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 14 00:20:58.009025 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 14 00:20:58.009043 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Mar 14 00:20:58.009060 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Mar 14 00:20:58.009077 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Mar 14 00:20:58.009094 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Mar 14 00:20:58.009112 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Mar 14 00:20:58.009133 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Mar 14 00:20:58.009150 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 14 00:20:58.009166 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Mar 14 00:20:58.009180 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Mar 14 00:20:58.009194 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Mar 14 00:20:58.009210 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Mar 14 00:20:58.009225 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Mar 14 00:20:58.009240 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Mar 14 00:20:58.009255 kernel: x86/fpu: Enabled 
xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. Mar 14 00:20:58.011074 kernel: Freeing SMP alternatives memory: 32K Mar 14 00:20:58.011097 kernel: pid_max: default: 32768 minimum: 301 Mar 14 00:20:58.011120 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 14 00:20:58.011136 kernel: landlock: Up and running. Mar 14 00:20:58.011151 kernel: SELinux: Initializing. Mar 14 00:20:58.011167 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Mar 14 00:20:58.011184 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Mar 14 00:20:58.011200 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Mar 14 00:20:58.011214 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 14 00:20:58.011231 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 14 00:20:58.011248 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 14 00:20:58.011284 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Mar 14 00:20:58.011313 kernel: signal: max sigframe size: 3632 Mar 14 00:20:58.011331 kernel: rcu: Hierarchical SRCU implementation. Mar 14 00:20:58.011349 kernel: rcu: Max phase no-delay instances is 400. Mar 14 00:20:58.011366 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 14 00:20:58.011381 kernel: smp: Bringing up secondary CPUs ... Mar 14 00:20:58.011396 kernel: smpboot: x86: Booting SMP configuration: Mar 14 00:20:58.011410 kernel: .... node #0, CPUs: #1 Mar 14 00:20:58.011426 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Mar 14 00:20:58.011442 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Mar 14 00:20:58.011460 kernel: smp: Brought up 1 node, 2 CPUs Mar 14 00:20:58.011474 kernel: smpboot: Max logical packages: 1 Mar 14 00:20:58.011490 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) Mar 14 00:20:58.011504 kernel: devtmpfs: initialized Mar 14 00:20:58.011519 kernel: x86/mm: Memory block size: 128MB Mar 14 00:20:58.011534 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Mar 14 00:20:58.011549 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 14 00:20:58.011563 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Mar 14 00:20:58.011578 kernel: pinctrl core: initialized pinctrl subsystem Mar 14 00:20:58.011596 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 14 00:20:58.011610 kernel: audit: initializing netlink subsys (disabled) Mar 14 00:20:58.011626 kernel: audit: type=2000 audit(1773447659.095:1): state=initialized audit_enabled=0 res=1 Mar 14 00:20:58.011641 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 14 00:20:58.011656 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 14 00:20:58.011671 kernel: cpuidle: using governor menu Mar 14 00:20:58.011687 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 14 00:20:58.011701 kernel: dca service started, version 1.12.1 Mar 14 00:20:58.011717 kernel: PCI: Using configuration type 1 for base access Mar 14 00:20:58.011735 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Mar 14 00:20:58.011751 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 14 00:20:58.011766 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 14 00:20:58.011781 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 14 00:20:58.011797 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 14 00:20:58.011812 kernel: ACPI: Added _OSI(Module Device) Mar 14 00:20:58.011827 kernel: ACPI: Added _OSI(Processor Device) Mar 14 00:20:58.011842 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 14 00:20:58.011857 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Mar 14 00:20:58.011876 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 14 00:20:58.011892 kernel: ACPI: Interpreter enabled Mar 14 00:20:58.011907 kernel: ACPI: PM: (supports S0 S5) Mar 14 00:20:58.011922 kernel: ACPI: Using IOAPIC for interrupt routing Mar 14 00:20:58.011939 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 14 00:20:58.011954 kernel: PCI: Using E820 reservations for host bridge windows Mar 14 00:20:58.011970 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Mar 14 00:20:58.011986 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 14 00:20:58.012214 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Mar 14 00:20:58.012408 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Mar 14 00:20:58.012553 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Mar 14 00:20:58.012574 kernel: acpiphp: Slot [3] registered Mar 14 00:20:58.012592 kernel: acpiphp: Slot [4] registered Mar 14 00:20:58.012610 kernel: acpiphp: Slot [5] registered Mar 14 00:20:58.012627 kernel: acpiphp: Slot [6] registered Mar 14 00:20:58.012644 kernel: acpiphp: Slot [7] registered Mar 14 00:20:58.012665 kernel: 
acpiphp: Slot [8] registered Mar 14 00:20:58.012682 kernel: acpiphp: Slot [9] registered Mar 14 00:20:58.012699 kernel: acpiphp: Slot [10] registered Mar 14 00:20:58.012716 kernel: acpiphp: Slot [11] registered Mar 14 00:20:58.012733 kernel: acpiphp: Slot [12] registered Mar 14 00:20:58.012749 kernel: acpiphp: Slot [13] registered Mar 14 00:20:58.012766 kernel: acpiphp: Slot [14] registered Mar 14 00:20:58.012784 kernel: acpiphp: Slot [15] registered Mar 14 00:20:58.012802 kernel: acpiphp: Slot [16] registered Mar 14 00:20:58.012818 kernel: acpiphp: Slot [17] registered Mar 14 00:20:58.012838 kernel: acpiphp: Slot [18] registered Mar 14 00:20:58.012855 kernel: acpiphp: Slot [19] registered Mar 14 00:20:58.012872 kernel: acpiphp: Slot [20] registered Mar 14 00:20:58.012889 kernel: acpiphp: Slot [21] registered Mar 14 00:20:58.012906 kernel: acpiphp: Slot [22] registered Mar 14 00:20:58.012923 kernel: acpiphp: Slot [23] registered Mar 14 00:20:58.012940 kernel: acpiphp: Slot [24] registered Mar 14 00:20:58.012957 kernel: acpiphp: Slot [25] registered Mar 14 00:20:58.012973 kernel: acpiphp: Slot [26] registered Mar 14 00:20:58.012993 kernel: acpiphp: Slot [27] registered Mar 14 00:20:58.013010 kernel: acpiphp: Slot [28] registered Mar 14 00:20:58.013027 kernel: acpiphp: Slot [29] registered Mar 14 00:20:58.013044 kernel: acpiphp: Slot [30] registered Mar 14 00:20:58.013060 kernel: acpiphp: Slot [31] registered Mar 14 00:20:58.013077 kernel: PCI host bridge to bus 0000:00 Mar 14 00:20:58.013229 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 14 00:20:58.013400 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 14 00:20:58.013534 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 14 00:20:58.013658 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Mar 14 00:20:58.013784 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] Mar 14 
00:20:58.013909 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 14 00:20:58.014068 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Mar 14 00:20:58.014222 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Mar 14 00:20:58.016604 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Mar 14 00:20:58.016771 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Mar 14 00:20:58.016910 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Mar 14 00:20:58.017045 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Mar 14 00:20:58.017180 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Mar 14 00:20:58.017334 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Mar 14 00:20:58.017470 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Mar 14 00:20:58.017604 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Mar 14 00:20:58.017753 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Mar 14 00:20:58.017889 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref] Mar 14 00:20:58.018022 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Mar 14 00:20:58.018156 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb Mar 14 00:20:58.020353 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 14 00:20:58.020528 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Mar 14 00:20:58.020675 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff] Mar 14 00:20:58.020820 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Mar 14 00:20:58.020958 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff] Mar 14 00:20:58.020978 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 14 00:20:58.020994 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 14 00:20:58.021010 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 14 00:20:58.021025 
kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 14 00:20:58.021041 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Mar 14 00:20:58.021062 kernel: iommu: Default domain type: Translated Mar 14 00:20:58.021077 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 14 00:20:58.021094 kernel: efivars: Registered efivars operations Mar 14 00:20:58.021109 kernel: PCI: Using ACPI for IRQ routing Mar 14 00:20:58.021125 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 14 00:20:58.021140 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Mar 14 00:20:58.021156 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Mar 14 00:20:58.021303 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Mar 14 00:20:58.021442 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Mar 14 00:20:58.021582 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 14 00:20:58.021602 kernel: vgaarb: loaded Mar 14 00:20:58.021618 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Mar 14 00:20:58.021633 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Mar 14 00:20:58.021649 kernel: clocksource: Switched to clocksource kvm-clock Mar 14 00:20:58.021664 kernel: VFS: Disk quotas dquot_6.6.0 Mar 14 00:20:58.021680 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 14 00:20:58.021695 kernel: pnp: PnP ACPI init Mar 14 00:20:58.021714 kernel: pnp: PnP ACPI: found 5 devices Mar 14 00:20:58.021730 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 14 00:20:58.021746 kernel: NET: Registered PF_INET protocol family Mar 14 00:20:58.021762 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 14 00:20:58.021778 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Mar 14 00:20:58.021794 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 
bytes, linear) Mar 14 00:20:58.021809 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Mar 14 00:20:58.021825 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Mar 14 00:20:58.021841 kernel: TCP: Hash tables configured (established 16384 bind 16384) Mar 14 00:20:58.021859 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Mar 14 00:20:58.021875 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Mar 14 00:20:58.021891 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 14 00:20:58.021906 kernel: NET: Registered PF_XDP protocol family Mar 14 00:20:58.022032 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 14 00:20:58.022155 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 14 00:20:58.024162 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 14 00:20:58.024512 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Mar 14 00:20:58.024652 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Mar 14 00:20:58.024805 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Mar 14 00:20:58.024827 kernel: PCI: CLS 0 bytes, default 64 Mar 14 00:20:58.024846 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Mar 14 00:20:58.024863 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Mar 14 00:20:58.024880 kernel: clocksource: Switched to clocksource tsc Mar 14 00:20:58.024897 kernel: Initialise system trusted keyrings Mar 14 00:20:58.024915 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Mar 14 00:20:58.024933 kernel: Key type asymmetric registered Mar 14 00:20:58.024953 kernel: Asymmetric key parser 'x509' registered Mar 14 00:20:58.024969 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 14 00:20:58.024987 kernel: io scheduler 
mq-deadline registered Mar 14 00:20:58.025003 kernel: io scheduler kyber registered Mar 14 00:20:58.025020 kernel: io scheduler bfq registered Mar 14 00:20:58.025038 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 14 00:20:58.025055 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 14 00:20:58.025072 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 14 00:20:58.025089 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 14 00:20:58.025109 kernel: i8042: Warning: Keylock active Mar 14 00:20:58.025126 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 14 00:20:58.025143 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 14 00:20:58.025309 kernel: rtc_cmos 00:00: RTC can wake from S4 Mar 14 00:20:58.025442 kernel: rtc_cmos 00:00: registered as rtc0 Mar 14 00:20:58.025577 kernel: rtc_cmos 00:00: setting system clock to 2026-03-14T00:20:57 UTC (1773447657) Mar 14 00:20:58.025710 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Mar 14 00:20:58.025730 kernel: intel_pstate: CPU model not supported Mar 14 00:20:58.025753 kernel: efifb: probing for efifb Mar 14 00:20:58.025770 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k Mar 14 00:20:58.025787 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Mar 14 00:20:58.025804 kernel: efifb: scrolling: redraw Mar 14 00:20:58.025821 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Mar 14 00:20:58.025838 kernel: Console: switching to colour frame buffer device 100x37 Mar 14 00:20:58.025855 kernel: fb0: EFI VGA frame buffer device Mar 14 00:20:58.025872 kernel: pstore: Using crash dump compression: deflate Mar 14 00:20:58.025890 kernel: pstore: Registered efi_pstore as persistent store backend Mar 14 00:20:58.025910 kernel: NET: Registered PF_INET6 protocol family Mar 14 00:20:58.025927 kernel: Segment Routing with IPv6 Mar 14 00:20:58.025944 kernel: In-situ OAM (IOAM) with IPv6 Mar 14 
00:20:58.025960 kernel: NET: Registered PF_PACKET protocol family Mar 14 00:20:58.025977 kernel: Key type dns_resolver registered Mar 14 00:20:58.025994 kernel: IPI shorthand broadcast: enabled Mar 14 00:20:58.026037 kernel: sched_clock: Marking stable (472002926, 129725241)->(669330231, -67602064) Mar 14 00:20:58.026058 kernel: registered taskstats version 1 Mar 14 00:20:58.026076 kernel: Loading compiled-in X.509 certificates Mar 14 00:20:58.026097 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: a10808ddb7a43f470807cfbbb5be2c08229c2dec' Mar 14 00:20:58.026115 kernel: Key type .fscrypt registered Mar 14 00:20:58.026132 kernel: Key type fscrypt-provisioning registered Mar 14 00:20:58.026149 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 14 00:20:58.026168 kernel: ima: Allocated hash algorithm: sha1 Mar 14 00:20:58.026186 kernel: ima: No architecture policies found Mar 14 00:20:58.026203 kernel: clk: Disabling unused clocks Mar 14 00:20:58.026221 kernel: Freeing unused kernel image (initmem) memory: 42892K Mar 14 00:20:58.026239 kernel: Write protecting the kernel read-only data: 36864k Mar 14 00:20:58.026260 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Mar 14 00:20:58.026295 kernel: Run /init as init process Mar 14 00:20:58.026312 kernel: with arguments: Mar 14 00:20:58.026330 kernel: /init Mar 14 00:20:58.026347 kernel: with environment: Mar 14 00:20:58.026365 kernel: HOME=/ Mar 14 00:20:58.026382 kernel: TERM=linux Mar 14 00:20:58.026403 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 14 00:20:58.026430 systemd[1]: Detected virtualization amazon. 
Mar 14 00:20:58.026448 systemd[1]: Detected architecture x86-64.
Mar 14 00:20:58.026466 systemd[1]: Running in initrd.
Mar 14 00:20:58.026484 systemd[1]: No hostname configured, using default hostname.
Mar 14 00:20:58.026501 systemd[1]: Hostname set to .
Mar 14 00:20:58.026520 systemd[1]: Initializing machine ID from VM UUID.
Mar 14 00:20:58.026547 systemd[1]: Queued start job for default target initrd.target.
Mar 14 00:20:58.026565 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:20:58.026588 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:20:58.026607 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 14 00:20:58.026626 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 14 00:20:58.026645 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 14 00:20:58.026666 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 14 00:20:58.026691 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 14 00:20:58.026710 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 14 00:20:58.026728 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:20:58.026747 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:20:58.026765 systemd[1]: Reached target paths.target - Path Units.
Mar 14 00:20:58.026784 systemd[1]: Reached target slices.target - Slice Units.
Mar 14 00:20:58.026803 systemd[1]: Reached target swap.target - Swaps.
Mar 14 00:20:58.026824 systemd[1]: Reached target timers.target - Timer Units.
Mar 14 00:20:58.026843 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 14 00:20:58.026861 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 14 00:20:58.026880 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 14 00:20:58.026899 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 14 00:20:58.026918 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:20:58.026937 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:20:58.026955 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:20:58.026974 systemd[1]: Reached target sockets.target - Socket Units.
Mar 14 00:20:58.026996 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 14 00:20:58.027015 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 14 00:20:58.027034 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 14 00:20:58.027052 systemd[1]: Starting systemd-fsck-usr.service...
Mar 14 00:20:58.027071 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 14 00:20:58.027090 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 14 00:20:58.027109 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:20:58.027127 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 14 00:20:58.027174 systemd-journald[179]: Collecting audit messages is disabled.
Mar 14 00:20:58.027215 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:20:58.027234 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 14 00:20:58.027253 systemd[1]: Finished systemd-fsck-usr.service.
Mar 14 00:20:58.031202 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 14 00:20:58.031227 systemd-journald[179]: Journal started
Mar 14 00:20:58.031278 systemd-journald[179]: Runtime Journal (/run/log/journal/ec29b4741704542047cd8bf899e55ad5) is 4.7M, max 38.2M, 33.4M free.
Mar 14 00:20:57.993256 systemd-modules-load[180]: Inserted module 'overlay'
Mar 14 00:20:58.040298 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 14 00:20:58.042319 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:20:58.044005 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 14 00:20:58.051702 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 14 00:20:58.051734 kernel: Bridge firewalling registered
Mar 14 00:20:58.052245 systemd-modules-load[180]: Inserted module 'br_netfilter'
Mar 14 00:20:58.055440 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:20:58.064488 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 14 00:20:58.068463 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 14 00:20:58.071300 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:20:58.080534 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:20:58.086139 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:20:58.092836 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:20:58.099698 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:20:58.107469 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 14 00:20:58.108497 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:20:58.112438 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 14 00:20:58.124348 dracut-cmdline[212]: dracut-dracut-053
Mar 14 00:20:58.128956 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7
Mar 14 00:20:58.166088 systemd-resolved[214]: Positive Trust Anchors:
Mar 14 00:20:58.166107 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 14 00:20:58.166171 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 14 00:20:58.175840 systemd-resolved[214]: Defaulting to hostname 'linux'.
Mar 14 00:20:58.177326 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 14 00:20:58.178061 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:20:58.218299 kernel: SCSI subsystem initialized
Mar 14 00:20:58.229291 kernel: Loading iSCSI transport class v2.0-870.
Mar 14 00:20:58.239303 kernel: iscsi: registered transport (tcp)
Mar 14 00:20:58.260461 kernel: iscsi: registered transport (qla4xxx)
Mar 14 00:20:58.260542 kernel: QLogic iSCSI HBA Driver
Mar 14 00:20:58.300061 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 14 00:20:58.304456 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 14 00:20:58.331561 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 14 00:20:58.331637 kernel: device-mapper: uevent: version 1.0.3
Mar 14 00:20:58.331660 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 14 00:20:58.374301 kernel: raid6: avx512x4 gen() 18350 MB/s
Mar 14 00:20:58.392290 kernel: raid6: avx512x2 gen() 18402 MB/s
Mar 14 00:20:58.410292 kernel: raid6: avx512x1 gen() 18369 MB/s
Mar 14 00:20:58.428287 kernel: raid6: avx2x4 gen() 18329 MB/s
Mar 14 00:20:58.446289 kernel: raid6: avx2x2 gen() 18307 MB/s
Mar 14 00:20:58.464516 kernel: raid6: avx2x1 gen() 14011 MB/s
Mar 14 00:20:58.464553 kernel: raid6: using algorithm avx512x2 gen() 18402 MB/s
Mar 14 00:20:58.483496 kernel: raid6: .... xor() 24810 MB/s, rmw enabled
Mar 14 00:20:58.483540 kernel: raid6: using avx512x2 recovery algorithm
Mar 14 00:20:58.505306 kernel: xor: automatically using best checksumming function avx
Mar 14 00:20:58.665299 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 14 00:20:58.675252 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 14 00:20:58.683538 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:20:58.696607 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Mar 14 00:20:58.701677 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:20:58.711458 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 14 00:20:58.728737 dracut-pre-trigger[400]: rd.md=0: removing MD RAID activation
Mar 14 00:20:58.758364 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 00:20:58.763503 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 14 00:20:58.814803 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:20:58.822478 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 14 00:20:58.851443 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:20:58.854061 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 00:20:58.855860 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:20:58.856962 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 14 00:20:58.864514 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 14 00:20:58.898868 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 00:20:58.918324 kernel: cryptd: max_cpu_qlen set to 1000
Mar 14 00:20:58.925909 kernel: ena 0000:00:05.0: ENA device version: 0.10
Mar 14 00:20:58.926207 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Mar 14 00:20:58.941298 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 14 00:20:58.947378 kernel: AES CTR mode by8 optimization enabled
Mar 14 00:20:58.954295 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Mar 14 00:20:58.968123 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 14 00:20:58.974029 kernel: nvme nvme0: pci function 0000:00:04.0
Mar 14 00:20:58.974252 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:c2:c2:2e:b8:d1
Mar 14 00:20:58.974484 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Mar 14 00:20:58.972387 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:20:58.976035 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:20:58.977511 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:20:58.977820 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:20:58.980524 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:20:58.988360 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Mar 14 00:20:58.989041 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:20:58.999561 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 14 00:20:58.999626 kernel: GPT:9289727 != 33554431
Mar 14 00:20:58.999647 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 14 00:20:59.001439 kernel: GPT:9289727 != 33554431
Mar 14 00:20:59.002314 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 14 00:20:59.003458 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 14 00:20:59.013628 (udev-worker)[446]: Network interface NamePolicy= disabled on kernel command line.
Mar 14 00:20:59.030915 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:20:59.037577 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:20:59.071757 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:20:59.090898 kernel: BTRFS: device fsid cd4a88d6-c21b-44c8-aac6-68c13cee1def devid 1 transid 34 /dev/nvme0n1p3 scanned by (udev-worker) (455)
Mar 14 00:20:59.110297 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (443)
Mar 14 00:20:59.167187 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Mar 14 00:20:59.186857 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Mar 14 00:20:59.197731 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Mar 14 00:20:59.198343 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Mar 14 00:20:59.205801 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 14 00:20:59.213472 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 14 00:20:59.220544 disk-uuid[628]: Primary Header is updated.
Mar 14 00:20:59.220544 disk-uuid[628]: Secondary Entries is updated.
Mar 14 00:20:59.220544 disk-uuid[628]: Secondary Header is updated.
Mar 14 00:20:59.225320 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 14 00:20:59.233308 kernel: GPT:disk_guids don't match.
Mar 14 00:20:59.233381 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 14 00:20:59.233400 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 14 00:20:59.240312 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 14 00:21:00.243095 disk-uuid[629]: The operation has completed successfully.
Mar 14 00:21:00.244359 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 14 00:21:00.384004 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 14 00:21:00.384129 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 14 00:21:00.409525 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 14 00:21:00.414774 sh[972]: Success
Mar 14 00:21:00.436292 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Mar 14 00:21:00.538734 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 14 00:21:00.548418 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 14 00:21:00.549651 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 14 00:21:00.585404 kernel: BTRFS info (device dm-0): first mount of filesystem cd4a88d6-c21b-44c8-aac6-68c13cee1def
Mar 14 00:21:00.585474 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:21:00.587325 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 14 00:21:00.590077 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 14 00:21:00.590119 kernel: BTRFS info (device dm-0): using free space tree
Mar 14 00:21:00.619295 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 14 00:21:00.623214 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 14 00:21:00.624425 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 14 00:21:00.628463 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 14 00:21:00.641537 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 14 00:21:00.676216 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:21:00.676309 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:21:00.676336 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 14 00:21:00.683291 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 14 00:21:00.695336 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 14 00:21:00.698302 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:21:00.705108 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 14 00:21:00.711558 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 14 00:21:00.743086 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 14 00:21:00.753607 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 14 00:21:00.793889 systemd-networkd[1164]: lo: Link UP
Mar 14 00:21:00.793900 systemd-networkd[1164]: lo: Gained carrier
Mar 14 00:21:00.795709 systemd-networkd[1164]: Enumeration completed
Mar 14 00:21:00.796146 systemd-networkd[1164]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:21:00.796151 systemd-networkd[1164]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 14 00:21:00.797237 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 14 00:21:00.799231 systemd[1]: Reached target network.target - Network.
Mar 14 00:21:00.801219 systemd-networkd[1164]: eth0: Link UP
Mar 14 00:21:00.801225 systemd-networkd[1164]: eth0: Gained carrier
Mar 14 00:21:00.801239 systemd-networkd[1164]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:21:00.826485 systemd-networkd[1164]: eth0: DHCPv4 address 172.31.23.47/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 14 00:21:00.875827 ignition[1123]: Ignition 2.19.0
Mar 14 00:21:00.875841 ignition[1123]: Stage: fetch-offline
Mar 14 00:21:00.876101 ignition[1123]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:21:00.876114 ignition[1123]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:21:00.878218 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 14 00:21:00.876576 ignition[1123]: Ignition finished successfully
Mar 14 00:21:00.882589 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 14 00:21:00.905459 ignition[1173]: Ignition 2.19.0
Mar 14 00:21:00.905474 ignition[1173]: Stage: fetch
Mar 14 00:21:00.905928 ignition[1173]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:21:00.905945 ignition[1173]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:21:00.906067 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:21:00.914498 ignition[1173]: PUT result: OK
Mar 14 00:21:00.916224 ignition[1173]: parsed url from cmdline: ""
Mar 14 00:21:00.916236 ignition[1173]: no config URL provided
Mar 14 00:21:00.916245 ignition[1173]: reading system config file "/usr/lib/ignition/user.ign"
Mar 14 00:21:00.916261 ignition[1173]: no config at "/usr/lib/ignition/user.ign"
Mar 14 00:21:00.916298 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:21:00.916899 ignition[1173]: PUT result: OK
Mar 14 00:21:00.916956 ignition[1173]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Mar 14 00:21:00.917579 ignition[1173]: GET result: OK
Mar 14 00:21:00.917669 ignition[1173]: parsing config with SHA512: cf48c3bf92d894e37ff5e05e85602b76f2afadbed710cd7395ddc21cec073ec8f089d1b8eb090dfd1281257b43557371f1fea35e51956a64816ffb631c9f2b30
Mar 14 00:21:00.922580 unknown[1173]: fetched base config from "system"
Mar 14 00:21:00.923144 ignition[1173]: fetch: fetch complete
Mar 14 00:21:00.922595 unknown[1173]: fetched base config from "system"
Mar 14 00:21:00.923152 ignition[1173]: fetch: fetch passed
Mar 14 00:21:00.922604 unknown[1173]: fetched user config from "aws"
Mar 14 00:21:00.923199 ignition[1173]: Ignition finished successfully
Mar 14 00:21:00.927223 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 14 00:21:00.930510 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 14 00:21:00.948509 ignition[1179]: Ignition 2.19.0
Mar 14 00:21:00.948523 ignition[1179]: Stage: kargs
Mar 14 00:21:00.948965 ignition[1179]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:21:00.948979 ignition[1179]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:21:00.949099 ignition[1179]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:21:00.949980 ignition[1179]: PUT result: OK
Mar 14 00:21:00.952494 ignition[1179]: kargs: kargs passed
Mar 14 00:21:00.952578 ignition[1179]: Ignition finished successfully
Mar 14 00:21:00.953941 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 14 00:21:00.960455 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 14 00:21:00.974490 ignition[1185]: Ignition 2.19.0
Mar 14 00:21:00.974503 ignition[1185]: Stage: disks
Mar 14 00:21:00.975013 ignition[1185]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:21:00.975027 ignition[1185]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:21:00.975184 ignition[1185]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:21:00.976005 ignition[1185]: PUT result: OK
Mar 14 00:21:00.978382 ignition[1185]: disks: disks passed
Mar 14 00:21:00.978983 ignition[1185]: Ignition finished successfully
Mar 14 00:21:00.980134 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 14 00:21:00.981085 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 14 00:21:00.981499 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 14 00:21:00.982013 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 14 00:21:00.982644 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 14 00:21:00.983155 systemd[1]: Reached target basic.target - Basic System.
Mar 14 00:21:00.992450 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 14 00:21:01.023802 systemd-fsck[1193]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 14 00:21:01.027783 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 14 00:21:01.034368 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 14 00:21:01.136314 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 08e1a4ba-bbe3-4d29-aaf8-5eb22e9a9bf3 r/w with ordered data mode. Quota mode: none.
Mar 14 00:21:01.136200 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 14 00:21:01.137386 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 14 00:21:01.150425 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:21:01.153991 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 14 00:21:01.155525 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 14 00:21:01.155594 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 14 00:21:01.155628 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 00:21:01.167788 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 14 00:21:01.172292 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1212)
Mar 14 00:21:01.177302 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:21:01.177365 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:21:01.177388 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 14 00:21:01.179542 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 14 00:21:01.186304 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 14 00:21:01.187191 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:21:01.252044 initrd-setup-root[1236]: cut: /sysroot/etc/passwd: No such file or directory
Mar 14 00:21:01.257701 initrd-setup-root[1243]: cut: /sysroot/etc/group: No such file or directory
Mar 14 00:21:01.262889 initrd-setup-root[1250]: cut: /sysroot/etc/shadow: No such file or directory
Mar 14 00:21:01.268484 initrd-setup-root[1257]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 14 00:21:01.380439 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 14 00:21:01.386396 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 14 00:21:01.390475 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 14 00:21:01.399306 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:21:01.422299 ignition[1325]: INFO : Ignition 2.19.0
Mar 14 00:21:01.422299 ignition[1325]: INFO : Stage: mount
Mar 14 00:21:01.425595 ignition[1325]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:21:01.425595 ignition[1325]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:21:01.425595 ignition[1325]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:21:01.428470 ignition[1325]: INFO : PUT result: OK
Mar 14 00:21:01.434862 ignition[1325]: INFO : mount: mount passed
Mar 14 00:21:01.435615 ignition[1325]: INFO : Ignition finished successfully
Mar 14 00:21:01.436648 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 14 00:21:01.442451 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 14 00:21:01.446255 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 14 00:21:01.582347 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 14 00:21:01.587535 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:21:01.607293 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1337)
Mar 14 00:21:01.611195 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:21:01.611260 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:21:01.611314 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 14 00:21:01.618682 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 14 00:21:01.620226 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:21:01.643897 ignition[1353]: INFO : Ignition 2.19.0 Mar 14 00:21:01.644590 ignition[1353]: INFO : Stage: files Mar 14 00:21:01.645668 ignition[1353]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 14 00:21:01.645668 ignition[1353]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 14 00:21:01.645668 ignition[1353]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 14 00:21:01.647485 ignition[1353]: INFO : PUT result: OK Mar 14 00:21:01.650873 ignition[1353]: DEBUG : files: compiled without relabeling support, skipping Mar 14 00:21:01.652017 ignition[1353]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 14 00:21:01.652017 ignition[1353]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 14 00:21:01.657411 ignition[1353]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 14 00:21:01.658237 ignition[1353]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 14 00:21:01.658237 ignition[1353]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 14 00:21:01.657904 unknown[1353]: wrote ssh authorized keys file for user: core Mar 14 00:21:01.660695 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 14 00:21:01.661490 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 14 00:21:01.748483 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 14 00:21:01.928565 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 14 00:21:01.928565 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file 
"/sysroot/opt/bin/cilium.tar.gz"
Mar 14 00:21:01.930389 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 14 00:21:02.160753 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 14 00:21:02.168434 systemd-networkd[1164]: eth0: Gained IPv6LL
Mar 14 00:21:02.364212 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 14 00:21:02.364212 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 14 00:21:02.366852 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 14 00:21:02.366852 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:21:02.366852 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:21:02.366852 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:21:02.366852 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:21:02.366852 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:21:02.366852 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:21:02.373353 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:21:02.373353 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:21:02.373353 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 14 00:21:02.373353 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 14 00:21:02.373353 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 14 00:21:02.373353 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Mar 14 00:21:02.737095 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 14 00:21:03.236027 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 14 00:21:03.236027 ignition[1353]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 14 00:21:03.239317 ignition[1353]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:21:03.241621 ignition[1353]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:21:03.241621 ignition[1353]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 14 00:21:03.241621 ignition[1353]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Mar 14 00:21:03.241621 ignition[1353]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Mar 14 00:21:03.241621 ignition[1353]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:21:03.241621 ignition[1353]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:21:03.241621 ignition[1353]: INFO : files: files passed
Mar 14 00:21:03.241621 ignition[1353]: INFO : Ignition finished successfully
Mar 14 00:21:03.242041 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 14 00:21:03.248651 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 14 00:21:03.251427 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 14 00:21:03.258620 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 14 00:21:03.258758 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 14 00:21:03.275738 initrd-setup-root-after-ignition[1383]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:21:03.275738 initrd-setup-root-after-ignition[1383]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:21:03.279144 initrd-setup-root-after-ignition[1387]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:21:03.280690 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 00:21:03.281817 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 14 00:21:03.296500 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 14 00:21:03.329541 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 14 00:21:03.329690 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
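The file, link, and unit writes in the Ignition "files" stage above are driven by a declarative Ignition config fetched from EC2 user data. A minimal, hypothetical sketch of a spec-3.x config that would produce ops like op(4), op(a), and op(c) (the paths and URLs are taken from the log; the version string, verification options, and unit contents are assumptions, not the actual config used on this host):

```json
{
  "ignition": { "version": "3.3.0" },
  "storage": {
    "files": [
      {
        "path": "/opt/bin/cilium.tar.gz",
        "contents": {
          "source": "https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz"
        }
      }
    ],
    "links": [
      {
        "path": "/etc/extensions/kubernetes.raw",
        "target": "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
      }
    ]
  },
  "systemd": {
    "units": [
      { "name": "prepare-helm.service", "enabled": true, "contents": "[Unit]\n..." }
    ]
  }
}
```

Ignition runs in the initrd and writes everything under /sysroot (the not-yet-pivoted root), which is why every path in the log carries that prefix; the "setting preset to enabled" op(e) corresponds to the `enabled: true` field.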
Mar 14 00:21:03.331371 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 14 00:21:03.332205 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 14 00:21:03.333014 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 14 00:21:03.340471 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 14 00:21:03.353461 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 00:21:03.357490 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 14 00:21:03.371183 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:21:03.371910 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:21:03.372851 systemd[1]: Stopped target timers.target - Timer Units.
Mar 14 00:21:03.373722 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 14 00:21:03.373900 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 00:21:03.375070 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 14 00:21:03.375908 systemd[1]: Stopped target basic.target - Basic System.
Mar 14 00:21:03.376681 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 14 00:21:03.377442 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 00:21:03.378194 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 14 00:21:03.379024 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 14 00:21:03.379788 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 00:21:03.380572 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 14 00:21:03.381714 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 14 00:21:03.382451 systemd[1]: Stopped target swap.target - Swaps.
Mar 14 00:21:03.383207 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 14 00:21:03.383406 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 00:21:03.384467 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:21:03.385249 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:21:03.385933 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 14 00:21:03.386087 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:21:03.386818 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 14 00:21:03.386984 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:21:03.388348 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 14 00:21:03.388526 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 00:21:03.389253 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 14 00:21:03.389422 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 14 00:21:03.401568 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 14 00:21:03.402287 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 14 00:21:03.402589 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:21:03.405544 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 14 00:21:03.406098 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 14 00:21:03.406347 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:21:03.408542 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 14 00:21:03.408741 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 00:21:03.422982 ignition[1407]: INFO : Ignition 2.19.0
Mar 14 00:21:03.424374 ignition[1407]: INFO : Stage: umount
Mar 14 00:21:03.424644 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 14 00:21:03.424776 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 14 00:21:03.429861 ignition[1407]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:21:03.429861 ignition[1407]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:21:03.429861 ignition[1407]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:21:03.429861 ignition[1407]: INFO : PUT result: OK
Mar 14 00:21:03.433617 ignition[1407]: INFO : umount: umount passed
Mar 14 00:21:03.434144 ignition[1407]: INFO : Ignition finished successfully
Mar 14 00:21:03.436159 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 14 00:21:03.436621 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 14 00:21:03.439117 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 14 00:21:03.439183 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 14 00:21:03.439808 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 14 00:21:03.439865 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 14 00:21:03.441380 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 14 00:21:03.441433 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 14 00:21:03.442404 systemd[1]: Stopped target network.target - Network.
Mar 14 00:21:03.442872 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 14 00:21:03.442934 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 14 00:21:03.444380 systemd[1]: Stopped target paths.target - Path Units.
Mar 14 00:21:03.444848 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 14 00:21:03.445308 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:21:03.445888 systemd[1]: Stopped target slices.target - Slice Units.
Mar 14 00:21:03.447346 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 14 00:21:03.447835 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 14 00:21:03.447893 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 14 00:21:03.448476 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 14 00:21:03.448527 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 14 00:21:03.449080 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 14 00:21:03.449141 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 14 00:21:03.450631 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 14 00:21:03.450690 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 14 00:21:03.454624 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 14 00:21:03.455879 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 14 00:21:03.456334 systemd-networkd[1164]: eth0: DHCPv6 lease lost
Mar 14 00:21:03.458080 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 14 00:21:03.458943 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 14 00:21:03.459085 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 14 00:21:03.459924 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 14 00:21:03.460037 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 14 00:21:03.462113 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 14 00:21:03.462190 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:21:03.463104 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 14 00:21:03.463168 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 14 00:21:03.467475 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 14 00:21:03.467901 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 14 00:21:03.467972 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 14 00:21:03.469098 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:21:03.473948 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 14 00:21:03.474087 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 14 00:21:03.476296 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 14 00:21:03.477174 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:21:03.478241 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 14 00:21:03.478434 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:21:03.479569 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 14 00:21:03.479629 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:21:03.488918 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 14 00:21:03.489835 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:21:03.492635 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 14 00:21:03.492710 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:21:03.493907 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 14 00:21:03.493955 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:21:03.494480 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 14 00:21:03.494641 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 14 00:21:03.495693 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 14 00:21:03.495753 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 14 00:21:03.496794 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 14 00:21:03.496852 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:21:03.503555 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 14 00:21:03.504967 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 14 00:21:03.505055 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:21:03.507410 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:21:03.507476 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:21:03.509150 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 14 00:21:03.510334 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 14 00:21:03.512704 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 14 00:21:03.512851 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 14 00:21:03.514114 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 14 00:21:03.518496 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 14 00:21:03.528649 systemd[1]: Switching root.
Mar 14 00:21:03.556119 systemd-journald[179]: Journal stopped
Mar 14 00:21:04.983108 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Mar 14 00:21:04.983200 kernel: SELinux: policy capability network_peer_controls=1
Mar 14 00:21:04.983226 kernel: SELinux: policy capability open_perms=1
Mar 14 00:21:04.983246 kernel: SELinux: policy capability extended_socket_class=1
Mar 14 00:21:04.985304 kernel: SELinux: policy capability always_check_network=0
Mar 14 00:21:04.985348 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 14 00:21:04.985368 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 14 00:21:04.985386 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 14 00:21:04.985412 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 14 00:21:04.985432 kernel: audit: type=1403 audit(1773447663.895:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 14 00:21:04.985459 systemd[1]: Successfully loaded SELinux policy in 44.464ms.
Mar 14 00:21:04.985488 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.861ms.
Mar 14 00:21:04.985512 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 14 00:21:04.985538 systemd[1]: Detected virtualization amazon.
Mar 14 00:21:04.985560 systemd[1]: Detected architecture x86-64.
Mar 14 00:21:04.985580 systemd[1]: Detected first boot.
Mar 14 00:21:04.985602 systemd[1]: Initializing machine ID from VM UUID.
Mar 14 00:21:04.985623 zram_generator::config[1453]: No configuration found.
Mar 14 00:21:04.985651 systemd[1]: Populated /etc with preset unit settings.
Mar 14 00:21:04.985672 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 14 00:21:04.985693 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 14 00:21:04.985719 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 14 00:21:04.985741 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 14 00:21:04.985763 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 14 00:21:04.985787 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 14 00:21:04.985808 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 14 00:21:04.985828 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 14 00:21:04.985850 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 14 00:21:04.985870 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 14 00:21:04.985895 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 14 00:21:04.985916 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:21:04.985937 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:21:04.985957 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 14 00:21:04.985977 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 14 00:21:04.985994 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 14 00:21:04.986013 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 14 00:21:04.986030 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 14 00:21:04.986049 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:21:04.986074 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 14 00:21:04.986095 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 14 00:21:04.986115 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 14 00:21:04.986136 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 14 00:21:04.986157 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:21:04.986177 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 14 00:21:04.986198 systemd[1]: Reached target slices.target - Slice Units.
Mar 14 00:21:04.986223 systemd[1]: Reached target swap.target - Swaps.
Mar 14 00:21:04.986247 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 14 00:21:04.988357 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 14 00:21:04.988395 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:21:04.988421 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:21:04.988444 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:21:04.988469 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 14 00:21:04.988489 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 14 00:21:04.988512 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 14 00:21:04.988535 systemd[1]: Mounting media.mount - External Media Directory...
Mar 14 00:21:04.988562 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:21:04.988585 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 14 00:21:04.988603 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 14 00:21:04.988625 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 14 00:21:04.988647 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 14 00:21:04.988670 systemd[1]: Reached target machines.target - Containers.
Mar 14 00:21:04.988693 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 14 00:21:04.988712 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:21:04.988737 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 14 00:21:04.988776 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 14 00:21:04.988798 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 14 00:21:04.988819 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 14 00:21:04.988840 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 14 00:21:04.988862 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 14 00:21:04.988883 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 14 00:21:04.988905 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 14 00:21:04.988927 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 14 00:21:04.988952 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 14 00:21:04.988972 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 14 00:21:04.988994 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 14 00:21:04.989020 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 14 00:21:04.989042 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 14 00:21:04.989064 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 14 00:21:04.989085 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 14 00:21:04.989106 kernel: fuse: init (API version 7.39)
Mar 14 00:21:04.989126 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 14 00:21:04.989151 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 14 00:21:04.989172 systemd[1]: Stopped verity-setup.service.
Mar 14 00:21:04.989196 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:21:04.989216 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 14 00:21:04.989238 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 14 00:21:04.989259 systemd[1]: Mounted media.mount - External Media Directory.
Mar 14 00:21:04.989295 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 14 00:21:04.989317 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 14 00:21:04.989341 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 14 00:21:04.989362 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:21:04.989383 kernel: loop: module loaded
Mar 14 00:21:04.989403 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 14 00:21:04.989425 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 14 00:21:04.989449 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 14 00:21:04.989470 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 14 00:21:04.989491 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 14 00:21:04.989512 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 14 00:21:04.989533 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 14 00:21:04.989553 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 14 00:21:04.989575 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 14 00:21:04.989596 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 14 00:21:04.989620 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:21:04.989642 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 14 00:21:04.989664 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 14 00:21:04.989685 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 14 00:21:04.989742 systemd-journald[1531]: Collecting audit messages is disabled.
Mar 14 00:21:04.989789 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:21:04.989814 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 14 00:21:04.989839 systemd-journald[1531]: Journal started
Mar 14 00:21:04.989875 systemd-journald[1531]: Runtime Journal (/run/log/journal/ec29b4741704542047cd8bf899e55ad5) is 4.7M, max 38.2M, 33.4M free.
Mar 14 00:21:04.580096 systemd[1]: Queued start job for default target multi-user.target.
Mar 14 00:21:04.598470 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Mar 14 00:21:04.598993 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 14 00:21:04.997286 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 14 00:21:04.999247 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 14 00:21:05.002569 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 14 00:21:05.003378 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 14 00:21:05.038805 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:21:05.044831 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 14 00:21:05.047400 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 14 00:21:05.047457 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 14 00:21:05.051376 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 14 00:21:05.061469 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 14 00:21:05.070471 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 14 00:21:05.071221 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 00:21:05.081493 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 14 00:21:05.085337 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 14 00:21:05.086374 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 14 00:21:05.096564 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 14 00:21:05.105439 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 14 00:21:05.119373 kernel: ACPI: bus type drm_connector registered
Mar 14 00:21:05.112714 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 14 00:21:05.116343 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 14 00:21:05.118178 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 14 00:21:05.121510 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 14 00:21:05.127308 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:21:05.129572 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 14 00:21:05.149656 systemd-journald[1531]: Time spent on flushing to /var/log/journal/ec29b4741704542047cd8bf899e55ad5 is 90.956ms for 987 entries.
Mar 14 00:21:05.149656 systemd-journald[1531]: System Journal (/var/log/journal/ec29b4741704542047cd8bf899e55ad5) is 8.0M, max 195.6M, 187.6M free.
Mar 14 00:21:05.276582 systemd-journald[1531]: Received client request to flush runtime journal.
Mar 14 00:21:05.276657 kernel: loop0: detected capacity change from 0 to 217752
Mar 14 00:21:05.158869 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 14 00:21:05.160628 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 14 00:21:05.166767 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 14 00:21:05.179961 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 14 00:21:05.234144 udevadm[1586]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 14 00:21:05.278372 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 14 00:21:05.294114 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 14 00:21:05.295334 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 14 00:21:05.296619 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 14 00:21:05.308466 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 14 00:21:05.325301 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 14 00:21:05.363452 kernel: loop1: detected capacity change from 0 to 140768
Mar 14 00:21:05.376391 systemd-tmpfiles[1599]: ACLs are not supported, ignoring.
Mar 14 00:21:05.377602 systemd-tmpfiles[1599]: ACLs are not supported, ignoring.
Mar 14 00:21:05.408678 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:21:05.477847 kernel: loop2: detected capacity change from 0 to 142488
Mar 14 00:21:05.550323 kernel: loop3: detected capacity change from 0 to 61336
Mar 14 00:21:05.686290 kernel: loop4: detected capacity change from 0 to 217752
Mar 14 00:21:05.736176 kernel: loop5: detected capacity change from 0 to 140768
Mar 14 00:21:05.786564 kernel: loop6: detected capacity change from 0 to 142488
Mar 14 00:21:05.827299 kernel: loop7: detected capacity change from 0 to 61336
Mar 14 00:21:05.852321 (sd-merge)[1609]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Mar 14 00:21:05.853212 (sd-merge)[1609]: Merged extensions into '/usr'.
Mar 14 00:21:05.862981 systemd[1]: Reloading requested from client PID 1585 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 14 00:21:05.863001 systemd[1]: Reloading...
Mar 14 00:21:06.006301 zram_generator::config[1637]: No configuration found.
Mar 14 00:21:06.059416 ldconfig[1580]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 14 00:21:06.180201 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:21:06.234257 systemd[1]: Reloading finished in 370 ms.
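The (sd-merge) lines above show systemd-sysext mounting the extension images named in the log as loop devices and overlaying them onto /usr. For an image to be accepted for merging, it must carry an extension-release file whose fields match the host's /etc/os-release; a hypothetical sketch for the kubernetes extension (field values assumed, not read from this host):

```ini
# /usr/lib/extension-release.d/extension-release.kubernetes (inside the image)
ID=flatcar
SYSEXT_LEVEL=1.0
```

The earlier Ignition op(a) wrote the symlink /etc/extensions/kubernetes.raw pointing at the downloaded image, which is one of the locations systemd-sysext scans for extensions to merge.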
Mar 14 00:21:06.266011 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 14 00:21:06.266914 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 14 00:21:06.267629 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 14 00:21:06.280569 systemd[1]: Starting ensure-sysext.service...
Mar 14 00:21:06.284506 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 14 00:21:06.289183 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:21:06.303348 systemd[1]: Reloading requested from client PID 1688 ('systemctl') (unit ensure-sysext.service)...
Mar 14 00:21:06.303377 systemd[1]: Reloading...
Mar 14 00:21:06.341357 systemd-udevd[1690]: Using default interface naming scheme 'v255'.
Mar 14 00:21:06.350577 systemd-tmpfiles[1689]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 14 00:21:06.351113 systemd-tmpfiles[1689]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 14 00:21:06.355459 systemd-tmpfiles[1689]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 14 00:21:06.355920 systemd-tmpfiles[1689]: ACLs are not supported, ignoring.
Mar 14 00:21:06.356014 systemd-tmpfiles[1689]: ACLs are not supported, ignoring.
Mar 14 00:21:06.361110 systemd-tmpfiles[1689]: Detected autofs mount point /boot during canonicalization of boot.
Mar 14 00:21:06.362625 systemd-tmpfiles[1689]: Skipping /boot
Mar 14 00:21:06.398701 systemd-tmpfiles[1689]: Detected autofs mount point /boot during canonicalization of boot.
Mar 14 00:21:06.398876 systemd-tmpfiles[1689]: Skipping /boot
Mar 14 00:21:06.447290 zram_generator::config[1725]: No configuration found.
Mar 14 00:21:06.555886 (udev-worker)[1731]: Network interface NamePolicy= disabled on kernel command line.
Mar 14 00:21:06.659446 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 14 00:21:06.667302 kernel: ACPI: button: Power Button [PWRF]
Mar 14 00:21:06.672316 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
Mar 14 00:21:06.678349 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Mar 14 00:21:06.682303 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Mar 14 00:21:06.710709 kernel: ACPI: button: Sleep Button [SLPF]
Mar 14 00:21:06.714573 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:21:06.734474 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1733)
Mar 14 00:21:06.827122 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 14 00:21:06.827281 systemd[1]: Reloading finished in 523 ms.
Mar 14 00:21:06.846946 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:21:06.849104 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:21:06.896308 kernel: mousedev: PS/2 mouse device common for all mice
Mar 14 00:21:06.928403 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:21:06.938976 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 14 00:21:06.951527 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 14 00:21:06.954527 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:21:06.955937 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 14 00:21:06.961385 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 14 00:21:06.967042 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 14 00:21:06.968789 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 00:21:06.972905 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 14 00:21:06.983652 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 14 00:21:06.995058 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 14 00:21:07.008405 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 14 00:21:07.019393 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:21:07.020856 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:21:07.026138 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 14 00:21:07.026683 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 14 00:21:07.028946 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 14 00:21:07.029136 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 14 00:21:07.031606 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 14 00:21:07.032195 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 14 00:21:07.089552 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 14 00:21:07.097989 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 14 00:21:07.106426 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Mar 14 00:21:07.114832 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 14 00:21:07.115320 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 14 00:21:07.121079 augenrules[1908]: No rules Mar 14 00:21:07.124990 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 14 00:21:07.127725 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 14 00:21:07.131619 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 14 00:21:07.145592 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 14 00:21:07.150472 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 14 00:21:07.151465 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 14 00:21:07.159329 lvm[1909]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 14 00:21:07.158944 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 14 00:21:07.160463 systemd[1]: Reached target time-set.target - System Time Set. Mar 14 00:21:07.176770 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 14 00:21:07.180655 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 14 00:21:07.184517 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 14 00:21:07.186332 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 14 00:21:07.199470 systemd[1]: Finished ensure-sysext.service. 
Mar 14 00:21:07.201986 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 14 00:21:07.210675 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 14 00:21:07.210898 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 14 00:21:07.215592 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 14 00:21:07.226505 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 14 00:21:07.228112 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 14 00:21:07.230492 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 14 00:21:07.233092 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:21:07.234034 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 14 00:21:07.234238 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 14 00:21:07.235111 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 14 00:21:07.235610 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 14 00:21:07.236454 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 14 00:21:07.236624 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 14 00:21:07.243508 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 14 00:21:07.248315 lvm[1927]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 14 00:21:07.252301 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 14 00:21:07.269564 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Mar 14 00:21:07.270346 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 14 00:21:07.278615 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 14 00:21:07.284900 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 14 00:21:07.294110 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 14 00:21:07.378123 systemd-networkd[1883]: lo: Link UP Mar 14 00:21:07.378618 systemd-networkd[1883]: lo: Gained carrier Mar 14 00:21:07.380565 systemd-networkd[1883]: Enumeration completed Mar 14 00:21:07.380812 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 14 00:21:07.383677 systemd-networkd[1883]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:21:07.383795 systemd-networkd[1883]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 14 00:21:07.388139 systemd-resolved[1886]: Positive Trust Anchors: Mar 14 00:21:07.388363 systemd-resolved[1886]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 14 00:21:07.388415 systemd-resolved[1886]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 14 00:21:07.389199 systemd-networkd[1883]: eth0: Link UP Mar 14 00:21:07.389441 systemd-networkd[1883]: eth0: Gained carrier Mar 14 00:21:07.389468 systemd-networkd[1883]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:21:07.390596 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 14 00:21:07.397185 systemd-resolved[1886]: Defaulting to hostname 'linux'. Mar 14 00:21:07.399862 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 14 00:21:07.400788 systemd[1]: Reached target network.target - Network. Mar 14 00:21:07.401170 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 14 00:21:07.401618 systemd[1]: Reached target sysinit.target - System Initialization. Mar 14 00:21:07.402081 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 14 00:21:07.402538 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 14 00:21:07.403064 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 14 00:21:07.403538 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Mar 14 00:21:07.403904 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 14 00:21:07.404260 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 14 00:21:07.404354 systemd[1]: Reached target paths.target - Path Units. Mar 14 00:21:07.404367 systemd-networkd[1883]: eth0: DHCPv4 address 172.31.23.47/20, gateway 172.31.16.1 acquired from 172.31.16.1 Mar 14 00:21:07.404796 systemd[1]: Reached target timers.target - Timer Units. Mar 14 00:21:07.406581 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 14 00:21:07.408388 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 14 00:21:07.412502 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 14 00:21:07.413549 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 14 00:21:07.414027 systemd[1]: Reached target sockets.target - Socket Units. Mar 14 00:21:07.414463 systemd[1]: Reached target basic.target - Basic System. Mar 14 00:21:07.414944 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 14 00:21:07.414983 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 14 00:21:07.416084 systemd[1]: Starting containerd.service - containerd container runtime... Mar 14 00:21:07.420456 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 14 00:21:07.430470 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 14 00:21:07.433662 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 14 00:21:07.439412 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Mar 14 00:21:07.439967 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 14 00:21:07.444623 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 14 00:21:07.449574 systemd[1]: Started ntpd.service - Network Time Service. Mar 14 00:21:07.461454 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 14 00:21:07.469488 systemd[1]: Starting setup-oem.service - Setup OEM... Mar 14 00:21:07.473800 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 14 00:21:07.488980 jq[1951]: false Mar 14 00:21:07.490311 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 14 00:21:07.498028 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 14 00:21:07.499108 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 14 00:21:07.500819 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 14 00:21:07.508763 systemd[1]: Starting update-engine.service - Update Engine... Mar 14 00:21:07.513407 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 14 00:21:07.517826 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 14 00:21:07.519143 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 14 00:21:07.599293 jq[1966]: true Mar 14 00:21:07.613231 systemd[1]: motdgen.service: Deactivated successfully. Mar 14 00:21:07.613545 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Mar 14 00:21:07.620314 extend-filesystems[1952]: Found loop4 Mar 14 00:21:07.621723 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 14 00:21:07.625753 extend-filesystems[1952]: Found loop5 Mar 14 00:21:07.625753 extend-filesystems[1952]: Found loop6 Mar 14 00:21:07.625753 extend-filesystems[1952]: Found loop7 Mar 14 00:21:07.625753 extend-filesystems[1952]: Found nvme0n1 Mar 14 00:21:07.625753 extend-filesystems[1952]: Found nvme0n1p1 Mar 14 00:21:07.625753 extend-filesystems[1952]: Found nvme0n1p2 Mar 14 00:21:07.625753 extend-filesystems[1952]: Found nvme0n1p3 Mar 14 00:21:07.625753 extend-filesystems[1952]: Found usr Mar 14 00:21:07.625753 extend-filesystems[1952]: Found nvme0n1p4 Mar 14 00:21:07.625753 extend-filesystems[1952]: Found nvme0n1p6 Mar 14 00:21:07.625753 extend-filesystems[1952]: Found nvme0n1p7 Mar 14 00:21:07.625753 extend-filesystems[1952]: Found nvme0n1p9 Mar 14 00:21:07.625753 extend-filesystems[1952]: Checking size of /dev/nvme0n1p9 Mar 14 00:21:07.623381 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 14 00:21:07.654988 tar[1970]: linux-amd64/LICENSE Mar 14 00:21:07.654988 tar[1970]: linux-amd64/helm Mar 14 00:21:07.636804 dbus-daemon[1950]: [system] SELinux support is enabled Mar 14 00:21:07.657662 update_engine[1963]: I20260314 00:21:07.633622 1963 main.cc:92] Flatcar Update Engine starting Mar 14 00:21:07.641889 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Mar 14 00:21:07.641777 dbus-daemon[1950]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1883 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 14 00:21:07.642759 (ntainerd)[1984]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 14 00:21:07.655009 dbus-daemon[1950]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 14 00:21:07.650957 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 14 00:21:07.650995 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 14 00:21:07.651527 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 14 00:21:07.651560 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 14 00:21:07.669016 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Mar 14 00:21:07.670577 extend-filesystems[1952]: Resized partition /dev/nvme0n1p9 Mar 14 00:21:07.672979 systemd[1]: Finished setup-oem.service - Setup OEM. Mar 14 00:21:07.674330 extend-filesystems[1997]: resize2fs 1.47.1 (20-May-2024) Mar 14 00:21:07.679352 systemd[1]: Started update-engine.service - Update Engine. Mar 14 00:21:07.688154 update_engine[1963]: I20260314 00:21:07.682567 1963 update_check_scheduler.cc:74] Next update check in 3m34s Mar 14 00:21:07.688255 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Mar 14 00:21:07.691285 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Mar 14 00:21:07.713697 jq[1987]: true Mar 14 00:21:07.719442 ntpd[1954]: ntpd 4.2.8p17@1.4004-o Fri Mar 13 21:53:10 UTC 2026 (1): Starting Mar 14 00:21:07.725664 ntpd[1954]: 14 Mar 00:21:07 ntpd[1954]: ntpd 4.2.8p17@1.4004-o Fri Mar 13 21:53:10 UTC 2026 (1): Starting Mar 14 00:21:07.725664 ntpd[1954]: 14 Mar 00:21:07 ntpd[1954]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 14 00:21:07.725664 ntpd[1954]: 14 Mar 00:21:07 ntpd[1954]: ---------------------------------------------------- Mar 14 00:21:07.725664 ntpd[1954]: 14 Mar 00:21:07 ntpd[1954]: ntp-4 is maintained by Network Time Foundation, Mar 14 00:21:07.725664 ntpd[1954]: 14 Mar 00:21:07 ntpd[1954]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 14 00:21:07.725664 ntpd[1954]: 14 Mar 00:21:07 ntpd[1954]: corporation. Support and training for ntp-4 are Mar 14 00:21:07.725664 ntpd[1954]: 14 Mar 00:21:07 ntpd[1954]: available at https://www.nwtime.org/support Mar 14 00:21:07.725664 ntpd[1954]: 14 Mar 00:21:07 ntpd[1954]: ---------------------------------------------------- Mar 14 00:21:07.719472 ntpd[1954]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 14 00:21:07.719483 ntpd[1954]: ---------------------------------------------------- Mar 14 00:21:07.719493 ntpd[1954]: ntp-4 is maintained by Network Time Foundation, Mar 14 00:21:07.719504 ntpd[1954]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 14 00:21:07.719514 ntpd[1954]: corporation. 
Support and training for ntp-4 are Mar 14 00:21:07.719524 ntpd[1954]: available at https://www.nwtime.org/support Mar 14 00:21:07.719534 ntpd[1954]: ---------------------------------------------------- Mar 14 00:21:07.728373 ntpd[1954]: proto: precision = 0.074 usec (-24) Mar 14 00:21:07.730382 ntpd[1954]: 14 Mar 00:21:07 ntpd[1954]: proto: precision = 0.074 usec (-24) Mar 14 00:21:07.730641 ntpd[1954]: basedate set to 2026-03-01 Mar 14 00:21:07.732354 ntpd[1954]: 14 Mar 00:21:07 ntpd[1954]: basedate set to 2026-03-01 Mar 14 00:21:07.732354 ntpd[1954]: 14 Mar 00:21:07 ntpd[1954]: gps base set to 2026-03-01 (week 2408) Mar 14 00:21:07.730664 ntpd[1954]: gps base set to 2026-03-01 (week 2408) Mar 14 00:21:07.737293 coreos-metadata[1949]: Mar 14 00:21:07.736 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 14 00:21:07.737770 ntpd[1954]: Listen and drop on 0 v6wildcard [::]:123 Mar 14 00:21:07.738058 ntpd[1954]: 14 Mar 00:21:07 ntpd[1954]: Listen and drop on 0 v6wildcard [::]:123 Mar 14 00:21:07.738156 ntpd[1954]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 14 00:21:07.738237 ntpd[1954]: 14 Mar 00:21:07 ntpd[1954]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 14 00:21:07.738959 ntpd[1954]: Listen normally on 2 lo 127.0.0.1:123 Mar 14 00:21:07.739710 ntpd[1954]: 14 Mar 00:21:07 ntpd[1954]: Listen normally on 2 lo 127.0.0.1:123 Mar 14 00:21:07.739821 ntpd[1954]: Listen normally on 3 eth0 172.31.23.47:123 Mar 14 00:21:07.741745 ntpd[1954]: 14 Mar 00:21:07 ntpd[1954]: Listen normally on 3 eth0 172.31.23.47:123 Mar 14 00:21:07.741745 ntpd[1954]: 14 Mar 00:21:07 ntpd[1954]: Listen normally on 4 lo [::1]:123 Mar 14 00:21:07.741745 ntpd[1954]: 14 Mar 00:21:07 ntpd[1954]: bind(21) AF_INET6 fe80::4c2:c2ff:fe2e:b8d1%2#123 flags 0x11 failed: Cannot assign requested address Mar 14 00:21:07.741745 ntpd[1954]: 14 Mar 00:21:07 ntpd[1954]: unable to create socket on eth0 (5) for fe80::4c2:c2ff:fe2e:b8d1%2#123 Mar 14 00:21:07.741745 ntpd[1954]: 14 Mar 00:21:07 
ntpd[1954]: failed to init interface for address fe80::4c2:c2ff:fe2e:b8d1%2 Mar 14 00:21:07.741745 ntpd[1954]: 14 Mar 00:21:07 ntpd[1954]: Listening on routing socket on fd #21 for interface updates Mar 14 00:21:07.739871 ntpd[1954]: Listen normally on 4 lo [::1]:123 Mar 14 00:21:07.739920 ntpd[1954]: bind(21) AF_INET6 fe80::4c2:c2ff:fe2e:b8d1%2#123 flags 0x11 failed: Cannot assign requested address Mar 14 00:21:07.739943 ntpd[1954]: unable to create socket on eth0 (5) for fe80::4c2:c2ff:fe2e:b8d1%2#123 Mar 14 00:21:07.739959 ntpd[1954]: failed to init interface for address fe80::4c2:c2ff:fe2e:b8d1%2 Mar 14 00:21:07.739990 ntpd[1954]: Listening on routing socket on fd #21 for interface updates Mar 14 00:21:07.751234 coreos-metadata[1949]: Mar 14 00:21:07.751 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Mar 14 00:21:07.757316 coreos-metadata[1949]: Mar 14 00:21:07.757 INFO Fetch successful Mar 14 00:21:07.757316 coreos-metadata[1949]: Mar 14 00:21:07.757 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Mar 14 00:21:07.757455 ntpd[1954]: 14 Mar 00:21:07 ntpd[1954]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 14 00:21:07.757455 ntpd[1954]: 14 Mar 00:21:07 ntpd[1954]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 14 00:21:07.757098 ntpd[1954]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 14 00:21:07.757133 ntpd[1954]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 14 00:21:07.758420 coreos-metadata[1949]: Mar 14 00:21:07.757 INFO Fetch successful Mar 14 00:21:07.758420 coreos-metadata[1949]: Mar 14 00:21:07.757 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Mar 14 00:21:07.759322 coreos-metadata[1949]: Mar 14 00:21:07.759 INFO Fetch successful Mar 14 00:21:07.759322 coreos-metadata[1949]: Mar 14 00:21:07.759 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Mar 14 00:21:07.762170 
systemd-logind[1962]: Watching system buttons on /dev/input/event1 (Power Button) Mar 14 00:21:07.764256 coreos-metadata[1949]: Mar 14 00:21:07.763 INFO Fetch successful Mar 14 00:21:07.764256 coreos-metadata[1949]: Mar 14 00:21:07.763 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Mar 14 00:21:07.772312 coreos-metadata[1949]: Mar 14 00:21:07.764 INFO Fetch failed with 404: resource not found Mar 14 00:21:07.772312 coreos-metadata[1949]: Mar 14 00:21:07.764 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Mar 14 00:21:07.767916 systemd-logind[1962]: Watching system buttons on /dev/input/event3 (Sleep Button) Mar 14 00:21:07.767988 systemd-logind[1962]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 14 00:21:07.775682 coreos-metadata[1949]: Mar 14 00:21:07.773 INFO Fetch successful Mar 14 00:21:07.775682 coreos-metadata[1949]: Mar 14 00:21:07.773 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Mar 14 00:21:07.775682 coreos-metadata[1949]: Mar 14 00:21:07.775 INFO Fetch successful Mar 14 00:21:07.775682 coreos-metadata[1949]: Mar 14 00:21:07.775 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Mar 14 00:21:07.776072 systemd-logind[1962]: New seat seat0. 
Mar 14 00:21:07.778394 coreos-metadata[1949]: Mar 14 00:21:07.777 INFO Fetch successful Mar 14 00:21:07.778394 coreos-metadata[1949]: Mar 14 00:21:07.777 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Mar 14 00:21:07.780422 coreos-metadata[1949]: Mar 14 00:21:07.779 INFO Fetch successful Mar 14 00:21:07.780422 coreos-metadata[1949]: Mar 14 00:21:07.779 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Mar 14 00:21:07.781630 coreos-metadata[1949]: Mar 14 00:21:07.781 INFO Fetch successful Mar 14 00:21:07.898852 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1733) Mar 14 00:21:07.950175 bash[2028]: Updated "/home/core/.ssh/authorized_keys" Mar 14 00:21:07.952904 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 14 00:21:07.978255 systemd[1]: Started systemd-logind.service - User Login Management. Mar 14 00:21:07.997341 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Mar 14 00:21:07.994022 systemd[1]: Starting sshkeys.service... Mar 14 00:21:08.024339 extend-filesystems[1997]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Mar 14 00:21:08.024339 extend-filesystems[1997]: old_desc_blocks = 1, new_desc_blocks = 2 Mar 14 00:21:08.024339 extend-filesystems[1997]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Mar 14 00:21:08.061084 extend-filesystems[1952]: Resized filesystem in /dev/nvme0n1p9 Mar 14 00:21:08.032418 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 14 00:21:08.032652 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 14 00:21:08.070767 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 14 00:21:08.071957 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Mar 14 00:21:08.100763 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 14 00:21:08.108723 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 14 00:21:08.176157 locksmithd[1999]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 14 00:21:08.193544 dbus-daemon[1950]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 14 00:21:08.193736 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Mar 14 00:21:08.203844 dbus-daemon[1950]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1993 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 14 00:21:08.220693 systemd[1]: Starting polkit.service - Authorization Manager... Mar 14 00:21:08.282003 polkitd[2101]: Started polkitd version 121 Mar 14 00:21:08.313755 polkitd[2101]: Loading rules from directory /etc/polkit-1/rules.d Mar 14 00:21:08.319528 polkitd[2101]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 14 00:21:08.321783 polkitd[2101]: Finished loading, compiling and executing 2 rules Mar 14 00:21:08.332481 dbus-daemon[1950]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 14 00:21:08.333157 systemd[1]: Started polkit.service - Authorization Manager. 
Mar 14 00:21:08.335998 polkitd[2101]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 14 00:21:08.380507 coreos-metadata[2082]: Mar 14 00:21:08.380 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 14 00:21:08.386215 coreos-metadata[2082]: Mar 14 00:21:08.386 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Mar 14 00:21:08.390635 coreos-metadata[2082]: Mar 14 00:21:08.390 INFO Fetch successful Mar 14 00:21:08.392754 coreos-metadata[2082]: Mar 14 00:21:08.390 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Mar 14 00:21:08.399127 coreos-metadata[2082]: Mar 14 00:21:08.397 INFO Fetch successful Mar 14 00:21:08.399924 unknown[2082]: wrote ssh authorized keys file for user: core Mar 14 00:21:08.447360 systemd-hostnamed[1993]: Hostname set to (transient) Mar 14 00:21:08.447362 systemd-resolved[1886]: System hostname changed to 'ip-172-31-23-47'. Mar 14 00:21:08.466779 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 14 00:21:08.472688 update-ssh-keys[2145]: Updated "/home/core/.ssh/authorized_keys" Mar 14 00:21:08.469794 systemd[1]: Finished sshkeys.service. Mar 14 00:21:08.506864 containerd[1984]: time="2026-03-14T00:21:08.506672245Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 14 00:21:08.608859 containerd[1984]: time="2026-03-14T00:21:08.607796268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:21:08.612471 containerd[1984]: time="2026-03-14T00:21:08.612421681Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:21:08.614763 containerd[1984]: time="2026-03-14T00:21:08.613043850Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 14 00:21:08.614763 containerd[1984]: time="2026-03-14T00:21:08.613077412Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 14 00:21:08.614763 containerd[1984]: time="2026-03-14T00:21:08.613280016Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 14 00:21:08.614763 containerd[1984]: time="2026-03-14T00:21:08.613308567Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 14 00:21:08.614763 containerd[1984]: time="2026-03-14T00:21:08.613380992Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:21:08.614763 containerd[1984]: time="2026-03-14T00:21:08.613399448Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:21:08.614763 containerd[1984]: time="2026-03-14T00:21:08.613617000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:21:08.614763 containerd[1984]: time="2026-03-14T00:21:08.613636617Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Mar 14 00:21:08.614763 containerd[1984]: time="2026-03-14T00:21:08.613655143Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:21:08.614763 containerd[1984]: time="2026-03-14T00:21:08.613671041Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 14 00:21:08.614763 containerd[1984]: time="2026-03-14T00:21:08.613750952Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:21:08.614763 containerd[1984]: time="2026-03-14T00:21:08.613967834Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:21:08.615229 containerd[1984]: time="2026-03-14T00:21:08.614113138Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:21:08.615229 containerd[1984]: time="2026-03-14T00:21:08.614132342Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 14 00:21:08.615229 containerd[1984]: time="2026-03-14T00:21:08.614220859Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 14 00:21:08.615858 containerd[1984]: time="2026-03-14T00:21:08.615836404Z" level=info msg="metadata content store policy set" policy=shared Mar 14 00:21:08.620813 containerd[1984]: time="2026-03-14T00:21:08.620783499Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 14 00:21:08.620960 containerd[1984]: time="2026-03-14T00:21:08.620945590Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Mar 14 00:21:08.623285 containerd[1984]: time="2026-03-14T00:21:08.622048947Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 14 00:21:08.623285 containerd[1984]: time="2026-03-14T00:21:08.622082601Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 14 00:21:08.623285 containerd[1984]: time="2026-03-14T00:21:08.622104929Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 14 00:21:08.623285 containerd[1984]: time="2026-03-14T00:21:08.622291365Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 14 00:21:08.623285 containerd[1984]: time="2026-03-14T00:21:08.622625683Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 14 00:21:08.623285 containerd[1984]: time="2026-03-14T00:21:08.622740956Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 14 00:21:08.623285 containerd[1984]: time="2026-03-14T00:21:08.622770929Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 14 00:21:08.623285 containerd[1984]: time="2026-03-14T00:21:08.622789887Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 14 00:21:08.623285 containerd[1984]: time="2026-03-14T00:21:08.622810507Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 14 00:21:08.623285 containerd[1984]: time="2026-03-14T00:21:08.622829563Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Mar 14 00:21:08.623285 containerd[1984]: time="2026-03-14T00:21:08.622847185Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 14 00:21:08.623285 containerd[1984]: time="2026-03-14T00:21:08.622867596Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 14 00:21:08.623285 containerd[1984]: time="2026-03-14T00:21:08.622887724Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 14 00:21:08.623285 containerd[1984]: time="2026-03-14T00:21:08.622909524Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 14 00:21:08.623816 containerd[1984]: time="2026-03-14T00:21:08.622928535Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 14 00:21:08.623816 containerd[1984]: time="2026-03-14T00:21:08.622946010Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 14 00:21:08.623816 containerd[1984]: time="2026-03-14T00:21:08.622974592Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 14 00:21:08.623816 containerd[1984]: time="2026-03-14T00:21:08.622997610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 14 00:21:08.623816 containerd[1984]: time="2026-03-14T00:21:08.623014876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 14 00:21:08.623816 containerd[1984]: time="2026-03-14T00:21:08.623033156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Mar 14 00:21:08.623816 containerd[1984]: time="2026-03-14T00:21:08.623049355Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 14 00:21:08.623816 containerd[1984]: time="2026-03-14T00:21:08.623067025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 14 00:21:08.623816 containerd[1984]: time="2026-03-14T00:21:08.623082891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 14 00:21:08.623816 containerd[1984]: time="2026-03-14T00:21:08.623100125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 14 00:21:08.623816 containerd[1984]: time="2026-03-14T00:21:08.623123082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 14 00:21:08.623816 containerd[1984]: time="2026-03-14T00:21:08.623146537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 14 00:21:08.623816 containerd[1984]: time="2026-03-14T00:21:08.623163456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 14 00:21:08.623816 containerd[1984]: time="2026-03-14T00:21:08.623180815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 14 00:21:08.624336 containerd[1984]: time="2026-03-14T00:21:08.623197448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 14 00:21:08.624336 containerd[1984]: time="2026-03-14T00:21:08.623219483Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 14 00:21:08.624336 containerd[1984]: time="2026-03-14T00:21:08.623247022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Mar 14 00:21:08.626661 containerd[1984]: time="2026-03-14T00:21:08.625492331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 14 00:21:08.626661 containerd[1984]: time="2026-03-14T00:21:08.625529996Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 14 00:21:08.626661 containerd[1984]: time="2026-03-14T00:21:08.626300375Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 14 00:21:08.626661 containerd[1984]: time="2026-03-14T00:21:08.626328686Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 14 00:21:08.626661 containerd[1984]: time="2026-03-14T00:21:08.626435859Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 14 00:21:08.626661 containerd[1984]: time="2026-03-14T00:21:08.626455640Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 14 00:21:08.626661 containerd[1984]: time="2026-03-14T00:21:08.626481565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 14 00:21:08.626661 containerd[1984]: time="2026-03-14T00:21:08.626517836Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 14 00:21:08.626661 containerd[1984]: time="2026-03-14T00:21:08.626533682Z" level=info msg="NRI interface is disabled by configuration." Mar 14 00:21:08.626661 containerd[1984]: time="2026-03-14T00:21:08.626550684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 14 00:21:08.629297 containerd[1984]: time="2026-03-14T00:21:08.628423683Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 14 00:21:08.629297 containerd[1984]: time="2026-03-14T00:21:08.628530656Z" level=info msg="Connect containerd service" Mar 14 00:21:08.629297 containerd[1984]: time="2026-03-14T00:21:08.628597484Z" level=info msg="using legacy CRI server" Mar 14 00:21:08.629297 containerd[1984]: time="2026-03-14T00:21:08.628608554Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 14 00:21:08.629297 containerd[1984]: time="2026-03-14T00:21:08.629212735Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 14 00:21:08.633286 containerd[1984]: time="2026-03-14T00:21:08.631962479Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 14 00:21:08.633286 containerd[1984]: time="2026-03-14T00:21:08.632496880Z" level=info msg="Start subscribing containerd event" Mar 14 00:21:08.633286 containerd[1984]: time="2026-03-14T00:21:08.632558745Z" level=info msg="Start recovering state" Mar 14 00:21:08.633286 containerd[1984]: time="2026-03-14T00:21:08.632635787Z" level=info msg="Start event monitor" Mar 14 00:21:08.633286 containerd[1984]: time="2026-03-14T00:21:08.632658588Z" level=info msg="Start 
snapshots syncer" Mar 14 00:21:08.633286 containerd[1984]: time="2026-03-14T00:21:08.632671267Z" level=info msg="Start cni network conf syncer for default" Mar 14 00:21:08.633286 containerd[1984]: time="2026-03-14T00:21:08.632681717Z" level=info msg="Start streaming server" Mar 14 00:21:08.634476 containerd[1984]: time="2026-03-14T00:21:08.634448662Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 14 00:21:08.634626 containerd[1984]: time="2026-03-14T00:21:08.634603230Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 14 00:21:08.635384 systemd[1]: Started containerd.service - containerd container runtime. Mar 14 00:21:08.646131 containerd[1984]: time="2026-03-14T00:21:08.645401693Z" level=info msg="containerd successfully booted in 0.141913s" Mar 14 00:21:08.719919 ntpd[1954]: bind(24) AF_INET6 fe80::4c2:c2ff:fe2e:b8d1%2#123 flags 0x11 failed: Cannot assign requested address Mar 14 00:21:08.720497 ntpd[1954]: 14 Mar 00:21:08 ntpd[1954]: bind(24) AF_INET6 fe80::4c2:c2ff:fe2e:b8d1%2#123 flags 0x11 failed: Cannot assign requested address Mar 14 00:21:08.720596 ntpd[1954]: unable to create socket on eth0 (6) for fe80::4c2:c2ff:fe2e:b8d1%2#123 Mar 14 00:21:08.720742 ntpd[1954]: 14 Mar 00:21:08 ntpd[1954]: unable to create socket on eth0 (6) for fe80::4c2:c2ff:fe2e:b8d1%2#123 Mar 14 00:21:08.720794 ntpd[1954]: failed to init interface for address fe80::4c2:c2ff:fe2e:b8d1%2 Mar 14 00:21:08.720857 ntpd[1954]: 14 Mar 00:21:08 ntpd[1954]: failed to init interface for address fe80::4c2:c2ff:fe2e:b8d1%2 Mar 14 00:21:08.760400 systemd-networkd[1883]: eth0: Gained IPv6LL Mar 14 00:21:08.766064 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 14 00:21:08.768157 systemd[1]: Reached target network-online.target - Network is Online. 
Mar 14 00:21:08.773623 sshd_keygen[2001]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 14 00:21:08.778682 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Mar 14 00:21:08.781913 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:21:08.788424 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 14 00:21:08.856788 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 14 00:21:08.874171 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 14 00:21:08.884058 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 14 00:21:08.903043 amazon-ssm-agent[2154]: Initializing new seelog logger Mar 14 00:21:08.903414 amazon-ssm-agent[2154]: New Seelog Logger Creation Complete Mar 14 00:21:08.903414 amazon-ssm-agent[2154]: 2026/03/14 00:21:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 14 00:21:08.903414 amazon-ssm-agent[2154]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 14 00:21:08.904617 amazon-ssm-agent[2154]: 2026/03/14 00:21:08 processing appconfig overrides Mar 14 00:21:08.904617 amazon-ssm-agent[2154]: 2026/03/14 00:21:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 14 00:21:08.904617 amazon-ssm-agent[2154]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 14 00:21:08.904617 amazon-ssm-agent[2154]: 2026/03/14 00:21:08 processing appconfig overrides Mar 14 00:21:08.906949 amazon-ssm-agent[2154]: 2026-03-14 00:21:08 INFO Proxy environment variables: Mar 14 00:21:08.906949 amazon-ssm-agent[2154]: 2026/03/14 00:21:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 14 00:21:08.906949 amazon-ssm-agent[2154]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Mar 14 00:21:08.906949 amazon-ssm-agent[2154]: 2026/03/14 00:21:08 processing appconfig overrides Mar 14 00:21:08.910320 amazon-ssm-agent[2154]: 2026/03/14 00:21:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 14 00:21:08.910320 amazon-ssm-agent[2154]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 14 00:21:08.910320 amazon-ssm-agent[2154]: 2026/03/14 00:21:08 processing appconfig overrides Mar 14 00:21:08.919105 systemd[1]: issuegen.service: Deactivated successfully. Mar 14 00:21:08.919533 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 14 00:21:08.930671 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 14 00:21:08.977946 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 14 00:21:08.989474 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 14 00:21:09.003724 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 14 00:21:09.005641 systemd[1]: Reached target getty.target - Login Prompts. Mar 14 00:21:09.007580 amazon-ssm-agent[2154]: 2026-03-14 00:21:08 INFO http_proxy: Mar 14 00:21:09.109720 amazon-ssm-agent[2154]: 2026-03-14 00:21:08 INFO no_proxy: Mar 14 00:21:09.192290 tar[1970]: linux-amd64/README.md Mar 14 00:21:09.207793 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Mar 14 00:21:09.208477 amazon-ssm-agent[2154]: 2026-03-14 00:21:08 INFO https_proxy: Mar 14 00:21:09.278973 amazon-ssm-agent[2154]: 2026-03-14 00:21:08 INFO Checking if agent identity type OnPrem can be assumed Mar 14 00:21:09.278973 amazon-ssm-agent[2154]: 2026-03-14 00:21:08 INFO Checking if agent identity type EC2 can be assumed Mar 14 00:21:09.278973 amazon-ssm-agent[2154]: 2026-03-14 00:21:09 INFO Agent will take identity from EC2 Mar 14 00:21:09.279158 amazon-ssm-agent[2154]: 2026-03-14 00:21:09 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 14 00:21:09.279158 amazon-ssm-agent[2154]: 2026-03-14 00:21:09 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 14 00:21:09.279158 amazon-ssm-agent[2154]: 2026-03-14 00:21:09 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 14 00:21:09.279158 amazon-ssm-agent[2154]: 2026-03-14 00:21:09 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Mar 14 00:21:09.279158 amazon-ssm-agent[2154]: 2026-03-14 00:21:09 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Mar 14 00:21:09.279158 amazon-ssm-agent[2154]: 2026-03-14 00:21:09 INFO [amazon-ssm-agent] Starting Core Agent Mar 14 00:21:09.279158 amazon-ssm-agent[2154]: 2026-03-14 00:21:09 INFO [amazon-ssm-agent] registrar detected. Attempting registration Mar 14 00:21:09.279158 amazon-ssm-agent[2154]: 2026-03-14 00:21:09 INFO [Registrar] Starting registrar module Mar 14 00:21:09.279158 amazon-ssm-agent[2154]: 2026-03-14 00:21:09 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Mar 14 00:21:09.279158 amazon-ssm-agent[2154]: 2026-03-14 00:21:09 INFO [EC2Identity] EC2 registration was successful. 
Mar 14 00:21:09.279158 amazon-ssm-agent[2154]: 2026-03-14 00:21:09 INFO [CredentialRefresher] credentialRefresher has started Mar 14 00:21:09.279158 amazon-ssm-agent[2154]: 2026-03-14 00:21:09 INFO [CredentialRefresher] Starting credentials refresher loop Mar 14 00:21:09.279158 amazon-ssm-agent[2154]: 2026-03-14 00:21:09 INFO EC2RoleProvider Successfully connected with instance profile role credentials Mar 14 00:21:09.307278 amazon-ssm-agent[2154]: 2026-03-14 00:21:09 INFO [CredentialRefresher] Next credential rotation will be in 31.33331568765 minutes Mar 14 00:21:10.293756 amazon-ssm-agent[2154]: 2026-03-14 00:21:10 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Mar 14 00:21:10.394379 amazon-ssm-agent[2154]: 2026-03-14 00:21:10 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2192) started Mar 14 00:21:10.495453 amazon-ssm-agent[2154]: 2026-03-14 00:21:10 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Mar 14 00:21:10.935637 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:21:10.937445 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 14 00:21:10.940393 systemd[1]: Startup finished in 600ms (kernel) + 6.199s (initrd) + 7.086s (userspace) = 13.886s. Mar 14 00:21:10.942395 (kubelet)[2208]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:21:11.568118 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 14 00:21:11.576673 systemd[1]: Started sshd@0-172.31.23.47:22-68.220.241.50:39670.service - OpenSSH per-connection server daemon (68.220.241.50:39670). 
Mar 14 00:21:11.719888 ntpd[1954]: Listen normally on 7 eth0 [fe80::4c2:c2ff:fe2e:b8d1%2]:123 Mar 14 00:21:11.720255 ntpd[1954]: 14 Mar 00:21:11 ntpd[1954]: Listen normally on 7 eth0 [fe80::4c2:c2ff:fe2e:b8d1%2]:123 Mar 14 00:21:11.857106 kubelet[2208]: E0314 00:21:11.856978 2208 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:21:11.859614 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:21:11.859841 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:21:12.072811 sshd[2218]: Accepted publickey for core from 68.220.241.50 port 39670 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:21:12.075478 sshd[2218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:21:12.085222 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 14 00:21:12.090924 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 14 00:21:12.095396 systemd-logind[1962]: New session 1 of user core. Mar 14 00:21:12.107640 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 14 00:21:12.122760 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 14 00:21:12.126592 (systemd)[2224]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 14 00:21:12.240796 systemd[2224]: Queued start job for default target default.target. Mar 14 00:21:12.252653 systemd[2224]: Created slice app.slice - User Application Slice. Mar 14 00:21:12.252696 systemd[2224]: Reached target paths.target - Paths. Mar 14 00:21:12.252717 systemd[2224]: Reached target timers.target - Timers. 
Mar 14 00:21:12.254137 systemd[2224]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 14 00:21:12.266387 systemd[2224]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 14 00:21:12.266588 systemd[2224]: Reached target sockets.target - Sockets. Mar 14 00:21:12.266611 systemd[2224]: Reached target basic.target - Basic System. Mar 14 00:21:12.266661 systemd[2224]: Reached target default.target - Main User Target. Mar 14 00:21:12.266701 systemd[2224]: Startup finished in 133ms. Mar 14 00:21:12.266985 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 14 00:21:12.271455 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 14 00:21:12.632799 systemd[1]: Started sshd@1-172.31.23.47:22-68.220.241.50:39672.service - OpenSSH per-connection server daemon (68.220.241.50:39672). Mar 14 00:21:13.112217 sshd[2235]: Accepted publickey for core from 68.220.241.50 port 39672 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:21:13.113754 sshd[2235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:21:13.119032 systemd-logind[1962]: New session 2 of user core. Mar 14 00:21:13.128532 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 14 00:21:13.461021 sshd[2235]: pam_unix(sshd:session): session closed for user core Mar 14 00:21:13.465431 systemd[1]: sshd@1-172.31.23.47:22-68.220.241.50:39672.service: Deactivated successfully. Mar 14 00:21:13.467458 systemd[1]: session-2.scope: Deactivated successfully. Mar 14 00:21:13.468126 systemd-logind[1962]: Session 2 logged out. Waiting for processes to exit. Mar 14 00:21:13.469130 systemd-logind[1962]: Removed session 2. Mar 14 00:21:13.552501 systemd[1]: Started sshd@2-172.31.23.47:22-68.220.241.50:39688.service - OpenSSH per-connection server daemon (68.220.241.50:39688). 
Mar 14 00:21:14.036546 sshd[2242]: Accepted publickey for core from 68.220.241.50 port 39688 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:21:14.038508 sshd[2242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:21:14.043792 systemd-logind[1962]: New session 3 of user core. Mar 14 00:21:14.051500 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 14 00:21:14.379608 sshd[2242]: pam_unix(sshd:session): session closed for user core Mar 14 00:21:14.383903 systemd[1]: sshd@2-172.31.23.47:22-68.220.241.50:39688.service: Deactivated successfully. Mar 14 00:21:14.386132 systemd[1]: session-3.scope: Deactivated successfully. Mar 14 00:21:14.387015 systemd-logind[1962]: Session 3 logged out. Waiting for processes to exit. Mar 14 00:21:14.388001 systemd-logind[1962]: Removed session 3. Mar 14 00:21:14.470681 systemd[1]: Started sshd@3-172.31.23.47:22-68.220.241.50:39698.service - OpenSSH per-connection server daemon (68.220.241.50:39698). Mar 14 00:21:16.909294 systemd-resolved[1886]: Clock change detected. Flushing caches. Mar 14 00:21:17.143956 sshd[2249]: Accepted publickey for core from 68.220.241.50 port 39698 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:21:17.144675 sshd[2249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:21:17.149705 systemd-logind[1962]: New session 4 of user core. Mar 14 00:21:17.155759 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 14 00:21:17.496958 sshd[2249]: pam_unix(sshd:session): session closed for user core Mar 14 00:21:17.501917 systemd-logind[1962]: Session 4 logged out. Waiting for processes to exit. Mar 14 00:21:17.502200 systemd[1]: sshd@3-172.31.23.47:22-68.220.241.50:39698.service: Deactivated successfully. Mar 14 00:21:17.504264 systemd[1]: session-4.scope: Deactivated successfully. Mar 14 00:21:17.505188 systemd-logind[1962]: Removed session 4. 
Mar 14 00:21:17.583711 systemd[1]: Started sshd@4-172.31.23.47:22-68.220.241.50:39710.service - OpenSSH per-connection server daemon (68.220.241.50:39710). Mar 14 00:21:18.073522 sshd[2256]: Accepted publickey for core from 68.220.241.50 port 39710 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:21:18.074967 sshd[2256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:21:18.080107 systemd-logind[1962]: New session 5 of user core. Mar 14 00:21:18.087776 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 14 00:21:18.363353 sudo[2259]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 14 00:21:18.363886 sudo[2259]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:21:18.376096 sudo[2259]: pam_unix(sudo:session): session closed for user root Mar 14 00:21:18.454301 sshd[2256]: pam_unix(sshd:session): session closed for user core Mar 14 00:21:18.458195 systemd[1]: sshd@4-172.31.23.47:22-68.220.241.50:39710.service: Deactivated successfully. Mar 14 00:21:18.460192 systemd[1]: session-5.scope: Deactivated successfully. Mar 14 00:21:18.461897 systemd-logind[1962]: Session 5 logged out. Waiting for processes to exit. Mar 14 00:21:18.463175 systemd-logind[1962]: Removed session 5. Mar 14 00:21:18.543785 systemd[1]: Started sshd@5-172.31.23.47:22-68.220.241.50:39720.service - OpenSSH per-connection server daemon (68.220.241.50:39720). Mar 14 00:21:19.020831 sshd[2264]: Accepted publickey for core from 68.220.241.50 port 39720 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:21:19.022349 sshd[2264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:21:19.027442 systemd-logind[1962]: New session 6 of user core. Mar 14 00:21:19.036626 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 14 00:21:19.293488 sudo[2268]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 14 00:21:19.293878 sudo[2268]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:21:19.297884 sudo[2268]: pam_unix(sudo:session): session closed for user root Mar 14 00:21:19.303254 sudo[2267]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 14 00:21:19.303705 sudo[2267]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:21:19.324811 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 14 00:21:19.326876 auditctl[2271]: No rules Mar 14 00:21:19.327284 systemd[1]: audit-rules.service: Deactivated successfully. Mar 14 00:21:19.327529 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 14 00:21:19.330769 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 14 00:21:19.378497 augenrules[2289]: No rules Mar 14 00:21:19.380118 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 14 00:21:19.381207 sudo[2267]: pam_unix(sudo:session): session closed for user root Mar 14 00:21:19.458198 sshd[2264]: pam_unix(sshd:session): session closed for user core Mar 14 00:21:19.461713 systemd[1]: sshd@5-172.31.23.47:22-68.220.241.50:39720.service: Deactivated successfully. Mar 14 00:21:19.463897 systemd[1]: session-6.scope: Deactivated successfully. Mar 14 00:21:19.465387 systemd-logind[1962]: Session 6 logged out. Waiting for processes to exit. Mar 14 00:21:19.466633 systemd-logind[1962]: Removed session 6. Mar 14 00:21:19.547834 systemd[1]: Started sshd@6-172.31.23.47:22-68.220.241.50:39724.service - OpenSSH per-connection server daemon (68.220.241.50:39724). 
Mar 14 00:21:20.024586 sshd[2297]: Accepted publickey for core from 68.220.241.50 port 39724 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:21:20.026042 sshd[2297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:21:20.030481 systemd-logind[1962]: New session 7 of user core. Mar 14 00:21:20.039686 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 14 00:21:20.297110 sudo[2300]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 14 00:21:20.297510 sudo[2300]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:21:20.658766 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 14 00:21:20.661107 (dockerd)[2316]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 14 00:21:21.024797 dockerd[2316]: time="2026-03-14T00:21:21.024748927Z" level=info msg="Starting up" Mar 14 00:21:21.223768 dockerd[2316]: time="2026-03-14T00:21:21.223718410Z" level=info msg="Loading containers: start." Mar 14 00:21:21.367662 kernel: Initializing XFRM netlink socket Mar 14 00:21:21.395161 (udev-worker)[2338]: Network interface NamePolicy= disabled on kernel command line. Mar 14 00:21:21.457799 systemd-networkd[1883]: docker0: Link UP Mar 14 00:21:21.494882 dockerd[2316]: time="2026-03-14T00:21:21.494837326Z" level=info msg="Loading containers: done." 
Mar 14 00:21:21.517342 dockerd[2316]: time="2026-03-14T00:21:21.517252118Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 14 00:21:21.517641 dockerd[2316]: time="2026-03-14T00:21:21.517436182Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 14 00:21:21.517641 dockerd[2316]: time="2026-03-14T00:21:21.517579294Z" level=info msg="Daemon has completed initialization"
Mar 14 00:21:21.564174 dockerd[2316]: time="2026-03-14T00:21:21.564103929Z" level=info msg="API listen on /run/docker.sock"
Mar 14 00:21:21.564531 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 14 00:21:22.440861 containerd[1984]: time="2026-03-14T00:21:22.440814026Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\""
Mar 14 00:21:23.004120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount700035698.mount: Deactivated successfully.
Mar 14 00:21:24.232062 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 14 00:21:24.240517 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:21:24.489980 (kubelet)[2520]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:21:24.490001 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:21:24.568925 kubelet[2520]: E0314 00:21:24.568792 2520 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:21:24.574919 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:21:24.575128 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:21:24.884408 containerd[1984]: time="2026-03-14T00:21:24.884257337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:21:24.886423 containerd[1984]: time="2026-03-14T00:21:24.886237147Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=27696467"
Mar 14 00:21:24.888677 containerd[1984]: time="2026-03-14T00:21:24.888632840Z" level=info msg="ImageCreate event name:\"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:21:24.893002 containerd[1984]: time="2026-03-14T00:21:24.892755411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:21:24.894094 containerd[1984]: time="2026-03-14T00:21:24.893881195Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"27693066\" in 2.453024919s"
Mar 14 00:21:24.894094 containerd[1984]: time="2026-03-14T00:21:24.893928381Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\""
Mar 14 00:21:24.894808 containerd[1984]: time="2026-03-14T00:21:24.894779419Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\""
Mar 14 00:21:26.946842 containerd[1984]: time="2026-03-14T00:21:26.946790165Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:21:26.956902 containerd[1984]: time="2026-03-14T00:21:26.956827295Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=21450700"
Mar 14 00:21:26.965949 containerd[1984]: time="2026-03-14T00:21:26.965651675Z" level=info msg="ImageCreate event name:\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:21:26.969062 containerd[1984]: time="2026-03-14T00:21:26.969016800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:21:26.970252 containerd[1984]: time="2026-03-14T00:21:26.970211944Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"23142311\" in 2.075397879s"
Mar 14 00:21:26.970335 containerd[1984]: time="2026-03-14T00:21:26.970256654Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\""
Mar 14 00:21:26.971266 containerd[1984]: time="2026-03-14T00:21:26.971072225Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\""
Mar 14 00:21:28.295784 containerd[1984]: time="2026-03-14T00:21:28.295729558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:21:28.297091 containerd[1984]: time="2026-03-14T00:21:28.297043721Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=15548429"
Mar 14 00:21:28.298298 containerd[1984]: time="2026-03-14T00:21:28.298233938Z" level=info msg="ImageCreate event name:\"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:21:28.301415 containerd[1984]: time="2026-03-14T00:21:28.301336464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:21:28.302665 containerd[1984]: time="2026-03-14T00:21:28.302524279Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"17240058\" in 1.331417096s"
Mar 14 00:21:28.302665 containerd[1984]: time="2026-03-14T00:21:28.302565016Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\""
Mar 14 00:21:28.303438 containerd[1984]: time="2026-03-14T00:21:28.303238530Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\""
Mar 14 00:21:29.611217 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount427878287.mount: Deactivated successfully.
Mar 14 00:21:29.995786 containerd[1984]: time="2026-03-14T00:21:29.995733304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:21:29.996961 containerd[1984]: time="2026-03-14T00:21:29.996827209Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=25685312"
Mar 14 00:21:29.998229 containerd[1984]: time="2026-03-14T00:21:29.998100704Z" level=info msg="ImageCreate event name:\"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:21:30.000409 containerd[1984]: time="2026-03-14T00:21:30.000324320Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:21:30.001329 containerd[1984]: time="2026-03-14T00:21:30.001055845Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"25684331\" in 1.697771981s"
Mar 14 00:21:30.001329 containerd[1984]: time="2026-03-14T00:21:30.001100150Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\""
Mar 14 00:21:30.001871 containerd[1984]: time="2026-03-14T00:21:30.001844821Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Mar 14 00:21:30.474066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2845299792.mount: Deactivated successfully.
Mar 14 00:21:31.967924 containerd[1984]: time="2026-03-14T00:21:31.967867360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:21:31.969301 containerd[1984]: time="2026-03-14T00:21:31.969256023Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556542"
Mar 14 00:21:31.970288 containerd[1984]: time="2026-03-14T00:21:31.970230056Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:21:31.973236 containerd[1984]: time="2026-03-14T00:21:31.973184166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:21:31.974559 containerd[1984]: time="2026-03-14T00:21:31.974416946Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 1.972518426s"
Mar 14 00:21:31.974559 containerd[1984]: time="2026-03-14T00:21:31.974457172Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\""
Mar 14 00:21:31.975272 containerd[1984]: time="2026-03-14T00:21:31.975247087Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 14 00:21:32.445988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount361480520.mount: Deactivated successfully.
Mar 14 00:21:32.452047 containerd[1984]: time="2026-03-14T00:21:32.452001568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:21:32.453084 containerd[1984]: time="2026-03-14T00:21:32.452920174Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Mar 14 00:21:32.455022 containerd[1984]: time="2026-03-14T00:21:32.454239310Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:21:32.456802 containerd[1984]: time="2026-03-14T00:21:32.456754793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:21:32.457692 containerd[1984]: time="2026-03-14T00:21:32.457516402Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 482.235945ms"
Mar 14 00:21:32.457692 containerd[1984]: time="2026-03-14T00:21:32.457554791Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Mar 14 00:21:32.458594 containerd[1984]: time="2026-03-14T00:21:32.458432760Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Mar 14 00:21:32.961109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3508904430.mount: Deactivated successfully.
Mar 14 00:21:34.129853 containerd[1984]: time="2026-03-14T00:21:34.129789520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:21:34.131201 containerd[1984]: time="2026-03-14T00:21:34.131051931Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23630322"
Mar 14 00:21:34.132708 containerd[1984]: time="2026-03-14T00:21:34.132300279Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:21:34.137754 containerd[1984]: time="2026-03-14T00:21:34.135234750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:21:34.137754 containerd[1984]: time="2026-03-14T00:21:34.137433672Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 1.678969869s"
Mar 14 00:21:34.137754 containerd[1984]: time="2026-03-14T00:21:34.137471112Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\""
Mar 14 00:21:34.732008 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 14 00:21:34.740689 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:21:34.991587 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:21:34.999810 (kubelet)[2691]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:21:35.066845 kubelet[2691]: E0314 00:21:35.066802 2691 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:21:35.069623 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:21:35.069812 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:21:35.756468 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:21:35.769835 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:21:35.805774 systemd[1]: Reloading requested from client PID 2704 ('systemctl') (unit session-7.scope)...
Mar 14 00:21:35.805794 systemd[1]: Reloading...
Mar 14 00:21:35.944419 zram_generator::config[2745]: No configuration found.
Mar 14 00:21:36.091683 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:21:36.178545 systemd[1]: Reloading finished in 372 ms.
Mar 14 00:21:36.236424 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:21:36.241943 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:21:36.244745 systemd[1]: kubelet.service: Deactivated successfully.
Mar 14 00:21:36.245007 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:21:36.250842 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:21:36.444941 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:21:36.452924 (kubelet)[2810]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 14 00:21:36.503805 kubelet[2810]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 14 00:21:36.814226 kubelet[2810]: I0314 00:21:36.814167 2810 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Mar 14 00:21:36.814226 kubelet[2810]: I0314 00:21:36.814210 2810 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 14 00:21:36.814226 kubelet[2810]: I0314 00:21:36.814228 2810 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 14 00:21:36.814226 kubelet[2810]: I0314 00:21:36.814235 2810 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 14 00:21:36.817421 kubelet[2810]: I0314 00:21:36.816814 2810 server.go:951] "Client rotation is on, will bootstrap in background"
Mar 14 00:21:36.827922 kubelet[2810]: I0314 00:21:36.827889 2810 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 14 00:21:36.834549 kubelet[2810]: E0314 00:21:36.834506 2810 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.23.47:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.23.47:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 14 00:21:36.847355 kubelet[2810]: E0314 00:21:36.847310 2810 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 14 00:21:36.847562 kubelet[2810]: I0314 00:21:36.847385 2810 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 14 00:21:36.856198 kubelet[2810]: I0314 00:21:36.856165 2810 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 14 00:21:36.861193 kubelet[2810]: I0314 00:21:36.861137 2810 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 14 00:21:36.862990 kubelet[2810]: I0314 00:21:36.861191 2810 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-47","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 14 00:21:36.863149 kubelet[2810]: I0314 00:21:36.862992 2810 topology_manager.go:143] "Creating topology manager with none policy"
Mar 14 00:21:36.863149 kubelet[2810]: I0314 00:21:36.863007 2810 container_manager_linux.go:308] "Creating device plugin manager"
Mar 14 00:21:36.863149 kubelet[2810]: I0314 00:21:36.863129 2810 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 14 00:21:36.865068 kubelet[2810]: I0314 00:21:36.865045 2810 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Mar 14 00:21:36.865253 kubelet[2810]: I0314 00:21:36.865235 2810 kubelet.go:482] "Attempting to sync node with API server"
Mar 14 00:21:36.865325 kubelet[2810]: I0314 00:21:36.865260 2810 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 14 00:21:36.865325 kubelet[2810]: I0314 00:21:36.865295 2810 kubelet.go:394] "Adding apiserver pod source"
Mar 14 00:21:36.865325 kubelet[2810]: I0314 00:21:36.865308 2810 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 14 00:21:36.868784 kubelet[2810]: I0314 00:21:36.868562 2810 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 14 00:21:36.872484 kubelet[2810]: I0314 00:21:36.871491 2810 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 14 00:21:36.872484 kubelet[2810]: I0314 00:21:36.871547 2810 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 14 00:21:36.872484 kubelet[2810]: W0314 00:21:36.871619 2810 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 14 00:21:36.876310 kubelet[2810]: I0314 00:21:36.875944 2810 server.go:1257] "Started kubelet"
Mar 14 00:21:36.876639 kubelet[2810]: I0314 00:21:36.876606 2810 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Mar 14 00:21:36.877708 kubelet[2810]: I0314 00:21:36.877682 2810 server.go:317] "Adding debug handlers to kubelet server"
Mar 14 00:21:36.891596 kubelet[2810]: I0314 00:21:36.890958 2810 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 14 00:21:36.891596 kubelet[2810]: I0314 00:21:36.891045 2810 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 14 00:21:36.891596 kubelet[2810]: I0314 00:21:36.891337 2810 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 14 00:21:36.893307 kubelet[2810]: I0314 00:21:36.893084 2810 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Mar 14 00:21:36.894291 kubelet[2810]: I0314 00:21:36.894071 2810 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 14 00:21:36.897381 kubelet[2810]: I0314 00:21:36.897348 2810 volume_manager.go:311] "Starting Kubelet Volume Manager"
Mar 14 00:21:36.897631 kubelet[2810]: E0314 00:21:36.897610 2810 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-23-47\" not found"
Mar 14 00:21:36.901470 kubelet[2810]: I0314 00:21:36.901445 2810 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 14 00:21:36.902503 kubelet[2810]: I0314 00:21:36.901502 2810 reconciler.go:29] "Reconciler: start to sync state"
Mar 14 00:21:36.902503 kubelet[2810]: E0314 00:21:36.901988 2810 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-47?timeout=10s\": dial tcp 172.31.23.47:6443: connect: connection refused" interval="200ms"
Mar 14 00:21:36.902785 kubelet[2810]: E0314 00:21:36.900564 2810 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.23.47:6443/api/v1/namespaces/default/events\": dial tcp 172.31.23.47:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-23-47.189c8d48c005f214 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-47,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-47,},FirstTimestamp:2026-03-14 00:21:36.875909652 +0000 UTC m=+0.418316987,LastTimestamp:2026-03-14 00:21:36.875909652 +0000 UTC m=+0.418316987,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-47,}"
Mar 14 00:21:36.904571 kubelet[2810]: I0314 00:21:36.904552 2810 factory.go:223] Registration of the systemd container factory successfully
Mar 14 00:21:36.904787 kubelet[2810]: I0314 00:21:36.904769 2810 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 14 00:21:36.910035 kubelet[2810]: I0314 00:21:36.910008 2810 factory.go:223] Registration of the containerd container factory successfully
Mar 14 00:21:36.928102 kubelet[2810]: I0314 00:21:36.928054 2810 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 14 00:21:36.930357 kubelet[2810]: I0314 00:21:36.930327 2810 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 14 00:21:36.930357 kubelet[2810]: I0314 00:21:36.930356 2810 status_manager.go:249] "Starting to sync pod status with apiserver"
Mar 14 00:21:36.930515 kubelet[2810]: I0314 00:21:36.930384 2810 kubelet.go:2501] "Starting kubelet main sync loop"
Mar 14 00:21:36.930515 kubelet[2810]: E0314 00:21:36.930467 2810 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 14 00:21:36.947686 kubelet[2810]: I0314 00:21:36.947665 2810 cpu_manager.go:225] "Starting" policy="none"
Mar 14 00:21:36.947872 kubelet[2810]: I0314 00:21:36.947860 2810 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 14 00:21:36.948141 kubelet[2810]: I0314 00:21:36.947943 2810 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Mar 14 00:21:36.950441 kubelet[2810]: I0314 00:21:36.950421 2810 policy_none.go:50] "Start"
Mar 14 00:21:36.950441 kubelet[2810]: I0314 00:21:36.950441 2810 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 14 00:21:36.950588 kubelet[2810]: I0314 00:21:36.950455 2810 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 14 00:21:36.952809 kubelet[2810]: I0314 00:21:36.952783 2810 policy_none.go:44] "Start"
Mar 14 00:21:36.957868 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 14 00:21:36.965728 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 14 00:21:36.969303 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 14 00:21:36.980663 kubelet[2810]: E0314 00:21:36.980371 2810 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 14 00:21:36.980663 kubelet[2810]: I0314 00:21:36.980646 2810 eviction_manager.go:194] "Eviction manager: starting control loop"
Mar 14 00:21:36.980853 kubelet[2810]: I0314 00:21:36.980661 2810 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 14 00:21:36.981044 kubelet[2810]: I0314 00:21:36.981025 2810 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Mar 14 00:21:36.982899 kubelet[2810]: E0314 00:21:36.982819 2810 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 14 00:21:36.982899 kubelet[2810]: E0314 00:21:36.982869 2810 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-23-47\" not found"
Mar 14 00:21:37.043100 systemd[1]: Created slice kubepods-burstable-pod61e45e89fa668fa36222cca7258361d2.slice - libcontainer container kubepods-burstable-pod61e45e89fa668fa36222cca7258361d2.slice.
Mar 14 00:21:37.065778 kubelet[2810]: E0314 00:21:37.065345 2810 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-47\" not found" node="ip-172-31-23-47"
Mar 14 00:21:37.071129 systemd[1]: Created slice kubepods-burstable-pode8c3f196db3d0828eaf902542551d9f5.slice - libcontainer container kubepods-burstable-pode8c3f196db3d0828eaf902542551d9f5.slice.
Mar 14 00:21:37.074617 kubelet[2810]: E0314 00:21:37.073879 2810 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-47\" not found" node="ip-172-31-23-47"
Mar 14 00:21:37.076491 systemd[1]: Created slice kubepods-burstable-podc0a23b7c2b8b5513c1b971f079267b86.slice - libcontainer container kubepods-burstable-podc0a23b7c2b8b5513c1b971f079267b86.slice.
Mar 14 00:21:37.078333 kubelet[2810]: E0314 00:21:37.078306 2810 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-47\" not found" node="ip-172-31-23-47"
Mar 14 00:21:37.082628 kubelet[2810]: I0314 00:21:37.082601 2810 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-23-47"
Mar 14 00:21:37.082975 kubelet[2810]: E0314 00:21:37.082941 2810 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.31.23.47:6443/api/v1/nodes\": dial tcp 172.31.23.47:6443: connect: connection refused" node="ip-172-31-23-47"
Mar 14 00:21:37.102684 kubelet[2810]: E0314 00:21:37.102636 2810 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-47?timeout=10s\": dial tcp 172.31.23.47:6443: connect: connection refused" interval="400ms"
Mar 14 00:21:37.203144 kubelet[2810]: I0314 00:21:37.203095 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e8c3f196db3d0828eaf902542551d9f5-ca-certs\") pod \"kube-apiserver-ip-172-31-23-47\" (UID: \"e8c3f196db3d0828eaf902542551d9f5\") " pod="kube-system/kube-apiserver-ip-172-31-23-47"
Mar 14 00:21:37.203144 kubelet[2810]: I0314 00:21:37.203143 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c0a23b7c2b8b5513c1b971f079267b86-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-47\" (UID: \"c0a23b7c2b8b5513c1b971f079267b86\") " pod="kube-system/kube-controller-manager-ip-172-31-23-47"
Mar 14 00:21:37.203355 kubelet[2810]: I0314 00:21:37.203167 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c0a23b7c2b8b5513c1b971f079267b86-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-47\" (UID: \"c0a23b7c2b8b5513c1b971f079267b86\") " pod="kube-system/kube-controller-manager-ip-172-31-23-47"
Mar 14 00:21:37.203355 kubelet[2810]: I0314 00:21:37.203189 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c0a23b7c2b8b5513c1b971f079267b86-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-47\" (UID: \"c0a23b7c2b8b5513c1b971f079267b86\") " pod="kube-system/kube-controller-manager-ip-172-31-23-47"
Mar 14 00:21:37.203355 kubelet[2810]: I0314 00:21:37.203208 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/61e45e89fa668fa36222cca7258361d2-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-47\" (UID: \"61e45e89fa668fa36222cca7258361d2\") " pod="kube-system/kube-scheduler-ip-172-31-23-47"
Mar 14 00:21:37.203355 kubelet[2810]: I0314 00:21:37.203227 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e8c3f196db3d0828eaf902542551d9f5-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-47\" (UID: \"e8c3f196db3d0828eaf902542551d9f5\") " pod="kube-system/kube-apiserver-ip-172-31-23-47"
Mar 14 00:21:37.203355 kubelet[2810]: I0314 00:21:37.203249 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e8c3f196db3d0828eaf902542551d9f5-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-47\" (UID: \"e8c3f196db3d0828eaf902542551d9f5\") " pod="kube-system/kube-apiserver-ip-172-31-23-47"
Mar 14 00:21:37.203629 kubelet[2810]: I0314 00:21:37.203272 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c0a23b7c2b8b5513c1b971f079267b86-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-47\" (UID: \"c0a23b7c2b8b5513c1b971f079267b86\") " pod="kube-system/kube-controller-manager-ip-172-31-23-47"
Mar 14 00:21:37.203629 kubelet[2810]: I0314 00:21:37.203293 2810 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c0a23b7c2b8b5513c1b971f079267b86-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-47\" (UID: \"c0a23b7c2b8b5513c1b971f079267b86\") " pod="kube-system/kube-controller-manager-ip-172-31-23-47"
Mar 14 00:21:37.284775 kubelet[2810]: I0314 00:21:37.284746 2810 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-23-47"
Mar 14 00:21:37.285148 kubelet[2810]: E0314 00:21:37.285117 2810 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.31.23.47:6443/api/v1/nodes\": dial tcp 172.31.23.47:6443: connect: connection refused" node="ip-172-31-23-47"
Mar 14 00:21:37.369446 containerd[1984]: time="2026-03-14T00:21:37.369311650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-47,Uid:61e45e89fa668fa36222cca7258361d2,Namespace:kube-system,Attempt:0,}"
Mar 14 00:21:37.382922 containerd[1984]: time="2026-03-14T00:21:37.382642776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-47,Uid:c0a23b7c2b8b5513c1b971f079267b86,Namespace:kube-system,Attempt:0,}"
Mar 14 00:21:37.382922 containerd[1984]: time="2026-03-14T00:21:37.382642813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-47,Uid:e8c3f196db3d0828eaf902542551d9f5,Namespace:kube-system,Attempt:0,}"
Mar 14 00:21:37.503866 kubelet[2810]: E0314 00:21:37.503818 2810 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-47?timeout=10s\": dial tcp 172.31.23.47:6443: connect: connection refused" interval="800ms"
Mar 14 00:21:37.687147 kubelet[2810]: I0314 00:21:37.686711 2810 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-23-47"
Mar 14 00:21:37.687147 kubelet[2810]: E0314 00:21:37.687061 2810 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.31.23.47:6443/api/v1/nodes\": dial tcp 172.31.23.47:6443: connect: connection refused" node="ip-172-31-23-47"
Mar 14 00:21:37.830279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3739195767.mount: Deactivated successfully.
Mar 14 00:21:37.838510 containerd[1984]: time="2026-03-14T00:21:37.838460021Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:21:37.839552 containerd[1984]: time="2026-03-14T00:21:37.839411406Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Mar 14 00:21:37.840610 containerd[1984]: time="2026-03-14T00:21:37.840574234Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:21:37.841549 containerd[1984]: time="2026-03-14T00:21:37.841515081Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:21:37.849418 containerd[1984]: time="2026-03-14T00:21:37.848600819Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 14 00:21:37.849540 containerd[1984]: time="2026-03-14T00:21:37.848754786Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:21:37.852213 containerd[1984]: time="2026-03-14T00:21:37.851902425Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 14 00:21:37.854225 containerd[1984]: time="2026-03-14T00:21:37.853804318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:21:37.855520 containerd[1984]: time="2026-03-14T00:21:37.855109898Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 472.240749ms"
Mar 14 00:21:37.856261 containerd[1984]: time="2026-03-14T00:21:37.856223509Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 486.825918ms"
Mar 14 00:21:37.858408 containerd[1984]: time="2026-03-14T00:21:37.858274832Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 475.515577ms"
Mar 14 00:21:38.053316 containerd[1984]: time="2026-03-14T00:21:38.053024482Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:21:38.053316 containerd[1984]: time="2026-03-14T00:21:38.053125420Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:21:38.053316 containerd[1984]: time="2026-03-14T00:21:38.053149001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:21:38.053316 containerd[1984]: time="2026-03-14T00:21:38.053253441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:21:38.064126 containerd[1984]: time="2026-03-14T00:21:38.063778931Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:21:38.064126 containerd[1984]: time="2026-03-14T00:21:38.063856679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:21:38.064126 containerd[1984]: time="2026-03-14T00:21:38.063882525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:21:38.069209 containerd[1984]: time="2026-03-14T00:21:38.066580535Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:21:38.069209 containerd[1984]: time="2026-03-14T00:21:38.066649320Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:21:38.069209 containerd[1984]: time="2026-03-14T00:21:38.066673393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:21:38.069209 containerd[1984]: time="2026-03-14T00:21:38.066816914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:21:38.069209 containerd[1984]: time="2026-03-14T00:21:38.066321928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:21:38.094036 systemd[1]: Started cri-containerd-9bd47c01d726fb70045fefcedcf3fcffee9f254941fb08ac197d4320921e94f1.scope - libcontainer container 9bd47c01d726fb70045fefcedcf3fcffee9f254941fb08ac197d4320921e94f1.
Mar 14 00:21:38.123690 systemd[1]: Started cri-containerd-1303c3623a5f4916c8d82a053c687c56952115430262e319af878717af4371c8.scope - libcontainer container 1303c3623a5f4916c8d82a053c687c56952115430262e319af878717af4371c8.
Mar 14 00:21:38.126892 systemd[1]: Started cri-containerd-78f3f857f520e899598cb25ee50257c4b92079b0fa5b0d55f91d727386e974f7.scope - libcontainer container 78f3f857f520e899598cb25ee50257c4b92079b0fa5b0d55f91d727386e974f7.
Mar 14 00:21:38.210713 containerd[1984]: time="2026-03-14T00:21:38.210054615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-47,Uid:c0a23b7c2b8b5513c1b971f079267b86,Namespace:kube-system,Attempt:0,} returns sandbox id \"1303c3623a5f4916c8d82a053c687c56952115430262e319af878717af4371c8\""
Mar 14 00:21:38.219207 containerd[1984]: time="2026-03-14T00:21:38.219083208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-47,Uid:e8c3f196db3d0828eaf902542551d9f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"78f3f857f520e899598cb25ee50257c4b92079b0fa5b0d55f91d727386e974f7\""
Mar 14 00:21:38.238729 containerd[1984]: time="2026-03-14T00:21:38.238281406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-47,Uid:61e45e89fa668fa36222cca7258361d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"9bd47c01d726fb70045fefcedcf3fcffee9f254941fb08ac197d4320921e94f1\""
Mar 14 00:21:38.243714 containerd[1984]: time="2026-03-14T00:21:38.243665433Z" level=info msg="CreateContainer within sandbox \"78f3f857f520e899598cb25ee50257c4b92079b0fa5b0d55f91d727386e974f7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 14 00:21:38.246831 containerd[1984]: time="2026-03-14T00:21:38.246733903Z" level=info msg="CreateContainer within sandbox \"1303c3623a5f4916c8d82a053c687c56952115430262e319af878717af4371c8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 14 00:21:38.250737 containerd[1984]: time="2026-03-14T00:21:38.250558358Z" level=info msg="CreateContainer within sandbox \"9bd47c01d726fb70045fefcedcf3fcffee9f254941fb08ac197d4320921e94f1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 14 00:21:38.299343 containerd[1984]: time="2026-03-14T00:21:38.299294504Z" level=info msg="CreateContainer within sandbox \"1303c3623a5f4916c8d82a053c687c56952115430262e319af878717af4371c8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"19ce4e53e7eed8e417fda6df3af53e5380641586bbd4fa03adcd4ccd1400d554\""
Mar 14 00:21:38.302309 containerd[1984]: time="2026-03-14T00:21:38.302032384Z" level=info msg="CreateContainer within sandbox \"78f3f857f520e899598cb25ee50257c4b92079b0fa5b0d55f91d727386e974f7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a716b200eb14510732b21ecdd10dd3f37df93a2458702e8d7fb97956ee2db2e3\""
Mar 14 00:21:38.302663 containerd[1984]: time="2026-03-14T00:21:38.302632733Z" level=info msg="StartContainer for \"a716b200eb14510732b21ecdd10dd3f37df93a2458702e8d7fb97956ee2db2e3\""
Mar 14 00:21:38.306336 kubelet[2810]: E0314 00:21:38.305065 2810 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-47?timeout=10s\": dial tcp 172.31.23.47:6443: connect: connection refused" interval="1.6s"
Mar 14 00:21:38.309075 containerd[1984]: time="2026-03-14T00:21:38.307375791Z" level=info msg="CreateContainer within sandbox \"9bd47c01d726fb70045fefcedcf3fcffee9f254941fb08ac197d4320921e94f1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"db490b21abaa712bc1f9923e3a09693f3e4c49253bba3e2879ed0e4965cfc95b\""
Mar 14 00:21:38.309075 containerd[1984]: time="2026-03-14T00:21:38.307635345Z" level=info msg="StartContainer for \"19ce4e53e7eed8e417fda6df3af53e5380641586bbd4fa03adcd4ccd1400d554\""
Mar 14 00:21:38.316177 containerd[1984]: time="2026-03-14T00:21:38.316123364Z" level=info msg="StartContainer for \"db490b21abaa712bc1f9923e3a09693f3e4c49253bba3e2879ed0e4965cfc95b\""
Mar 14 00:21:38.346774 systemd[1]: Started cri-containerd-a716b200eb14510732b21ecdd10dd3f37df93a2458702e8d7fb97956ee2db2e3.scope - libcontainer container a716b200eb14510732b21ecdd10dd3f37df93a2458702e8d7fb97956ee2db2e3.
Mar 14 00:21:38.384653 systemd[1]: Started cri-containerd-19ce4e53e7eed8e417fda6df3af53e5380641586bbd4fa03adcd4ccd1400d554.scope - libcontainer container 19ce4e53e7eed8e417fda6df3af53e5380641586bbd4fa03adcd4ccd1400d554.
Mar 14 00:21:38.387547 systemd[1]: Started cri-containerd-db490b21abaa712bc1f9923e3a09693f3e4c49253bba3e2879ed0e4965cfc95b.scope - libcontainer container db490b21abaa712bc1f9923e3a09693f3e4c49253bba3e2879ed0e4965cfc95b.
Mar 14 00:21:38.477184 containerd[1984]: time="2026-03-14T00:21:38.477136023Z" level=info msg="StartContainer for \"a716b200eb14510732b21ecdd10dd3f37df93a2458702e8d7fb97956ee2db2e3\" returns successfully"
Mar 14 00:21:38.478497 containerd[1984]: time="2026-03-14T00:21:38.477270735Z" level=info msg="StartContainer for \"19ce4e53e7eed8e417fda6df3af53e5380641586bbd4fa03adcd4ccd1400d554\" returns successfully"
Mar 14 00:21:38.490125 kubelet[2810]: I0314 00:21:38.490092 2810 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-23-47"
Mar 14 00:21:38.490968 kubelet[2810]: E0314 00:21:38.490933 2810 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.31.23.47:6443/api/v1/nodes\": dial tcp 172.31.23.47:6443: connect: connection refused" node="ip-172-31-23-47"
Mar 14 00:21:38.500763 containerd[1984]: time="2026-03-14T00:21:38.500720102Z" level=info msg="StartContainer for \"db490b21abaa712bc1f9923e3a09693f3e4c49253bba3e2879ed0e4965cfc95b\" returns successfully"
Mar 14 00:21:38.957667 kubelet[2810]: E0314 00:21:38.957622 2810 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-47\" not found" node="ip-172-31-23-47"
Mar 14 00:21:38.962482 kubelet[2810]: E0314 00:21:38.958268 2810 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-47\" not found" node="ip-172-31-23-47"
Mar 14 00:21:38.969837 kubelet[2810]: E0314 00:21:38.969635 2810 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-47\" not found" node="ip-172-31-23-47"
Mar 14 00:21:39.965031 kubelet[2810]: E0314 00:21:39.964847 2810 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-47\" not found" node="ip-172-31-23-47"
Mar 14 00:21:39.965031 kubelet[2810]: E0314 00:21:39.964881 2810 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-47\" not found" node="ip-172-31-23-47"
Mar 14 00:21:40.094285 kubelet[2810]: I0314 00:21:40.094242 2810 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-23-47"
Mar 14 00:21:40.101634 kubelet[2810]: E0314 00:21:40.101592 2810 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-23-47\" not found" node="ip-172-31-23-47"
Mar 14 00:21:40.233726 kubelet[2810]: I0314 00:21:40.233531 2810 kubelet_node_status.go:77] "Successfully registered node" node="ip-172-31-23-47"
Mar 14 00:21:40.233726 kubelet[2810]: E0314 00:21:40.233572 2810 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"ip-172-31-23-47\": node \"ip-172-31-23-47\" not found"
Mar 14 00:21:40.256841 kubelet[2810]: E0314 00:21:40.256803 2810 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-23-47\" not found"
Mar 14 00:21:40.357306 kubelet[2810]: E0314 00:21:40.357250 2810 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-23-47\" not found"
Mar 14 00:21:40.457620 kubelet[2810]: E0314 00:21:40.457468 2810 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-23-47\" not found"
Mar 14 00:21:40.558252 kubelet[2810]: E0314 00:21:40.558113 2810 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-23-47\" not found"
Mar 14 00:21:40.658909 kubelet[2810]: E0314 00:21:40.658859 2810 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-23-47\" not found"
Mar 14 00:21:40.665019 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar 14 00:21:40.759749 kubelet[2810]: E0314 00:21:40.759684 2810 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-23-47\" not found"
Mar 14 00:21:40.860464 kubelet[2810]: E0314 00:21:40.860324 2810 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-23-47\" not found"
Mar 14 00:21:40.960917 kubelet[2810]: E0314 00:21:40.960874 2810 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-23-47\" not found"
Mar 14 00:21:41.102614 kubelet[2810]: I0314 00:21:41.102264 2810 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-23-47"
Mar 14 00:21:41.120047 kubelet[2810]: I0314 00:21:41.119707 2810 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-23-47"
Mar 14 00:21:41.133938 kubelet[2810]: I0314 00:21:41.133905 2810 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-23-47"
Mar 14 00:21:41.871803 kubelet[2810]: I0314 00:21:41.871526 2810 apiserver.go:52] "Watching apiserver"
Mar 14 00:21:41.902322 kubelet[2810]: I0314 00:21:41.902274 2810 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 14 00:21:42.423805 systemd[1]: Reloading requested from client PID 3095 ('systemctl') (unit session-7.scope)...
Mar 14 00:21:42.423823 systemd[1]: Reloading...
Mar 14 00:21:42.519474 zram_generator::config[3135]: No configuration found.
Mar 14 00:21:42.663720 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:21:42.771654 systemd[1]: Reloading finished in 347 ms.
Mar 14 00:21:42.817815 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:21:42.832003 systemd[1]: kubelet.service: Deactivated successfully.
Mar 14 00:21:42.832306 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:21:42.839008 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:21:43.084014 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:21:43.097861 (kubelet)[3195]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 14 00:21:43.164421 kubelet[3195]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 14 00:21:43.172190 kubelet[3195]: I0314 00:21:43.172147 3195 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Mar 14 00:21:43.172313 kubelet[3195]: I0314 00:21:43.172306 3195 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 14 00:21:43.172366 kubelet[3195]: I0314 00:21:43.172360 3195 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 14 00:21:43.172485 kubelet[3195]: I0314 00:21:43.172455 3195 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 14 00:21:43.172761 kubelet[3195]: I0314 00:21:43.172738 3195 server.go:951] "Client rotation is on, will bootstrap in background"
Mar 14 00:21:43.173990 kubelet[3195]: I0314 00:21:43.173965 3195 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 14 00:21:43.177586 kubelet[3195]: I0314 00:21:43.177428 3195 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 14 00:21:43.180606 kubelet[3195]: E0314 00:21:43.180583 3195 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 14 00:21:43.180985 kubelet[3195]: I0314 00:21:43.180775 3195 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 14 00:21:43.186438 kubelet[3195]: I0314 00:21:43.184750 3195 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 14 00:21:43.187878 kubelet[3195]: I0314 00:21:43.187690 3195 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 14 00:21:43.188180 kubelet[3195]: I0314 00:21:43.187985 3195 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-47","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 14 00:21:43.188315 kubelet[3195]: I0314 00:21:43.188304 3195 topology_manager.go:143] "Creating topology manager with none policy"
Mar 14 00:21:43.188360 kubelet[3195]: I0314 00:21:43.188354 3195 container_manager_linux.go:308] "Creating device plugin manager"
Mar 14 00:21:43.188447 kubelet[3195]: I0314 00:21:43.188437 3195 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 14 00:21:43.190556 kubelet[3195]: I0314 00:21:43.190492 3195 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Mar 14 00:21:43.190806 kubelet[3195]: I0314 00:21:43.190793 3195 kubelet.go:482] "Attempting to sync node with API server"
Mar 14 00:21:43.190896 kubelet[3195]: I0314 00:21:43.190885 3195 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 14 00:21:43.194701 kubelet[3195]: I0314 00:21:43.194685 3195 kubelet.go:394] "Adding apiserver pod source"
Mar 14 00:21:43.194787 kubelet[3195]: I0314 00:21:43.194780 3195 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 14 00:21:43.199543 kubelet[3195]: I0314 00:21:43.199509 3195 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 14 00:21:43.202600 kubelet[3195]: I0314 00:21:43.202578 3195 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 14 00:21:43.202705 kubelet[3195]: I0314 00:21:43.202624 3195 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 14 00:21:43.209257 kubelet[3195]: I0314 00:21:43.209193 3195 server.go:1257] "Started kubelet"
Mar 14 00:21:43.215788 kubelet[3195]: I0314 00:21:43.215595 3195 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Mar 14 00:21:43.219559 kubelet[3195]: I0314 00:21:43.218964 3195 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Mar 14 00:21:43.220121 kubelet[3195]: I0314 00:21:43.220051 3195 server.go:317] "Adding debug handlers to kubelet server"
Mar 14 00:21:43.224619 kubelet[3195]: I0314 00:21:43.224571 3195 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 14 00:21:43.225320 kubelet[3195]: I0314 00:21:43.224721 3195 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 14 00:21:43.225320 kubelet[3195]: I0314 00:21:43.224888 3195 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 14 00:21:43.225320 kubelet[3195]: I0314 00:21:43.225183 3195 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 14 00:21:43.229903 kubelet[3195]: I0314 00:21:43.229693 3195 volume_manager.go:311] "Starting Kubelet Volume Manager"
Mar 14 00:21:43.230120 kubelet[3195]: I0314 00:21:43.230102 3195 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 14 00:21:43.230276 kubelet[3195]: I0314 00:21:43.230261 3195 reconciler.go:29] "Reconciler: start to sync state"
Mar 14 00:21:43.232861 kubelet[3195]: I0314 00:21:43.232828 3195 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 14 00:21:43.237982 kubelet[3195]: I0314 00:21:43.237625 3195 factory.go:223] Registration of the containerd container factory successfully
Mar 14 00:21:43.237982 kubelet[3195]: I0314 00:21:43.237645 3195 factory.go:223] Registration of the systemd container factory successfully
Mar 14 00:21:43.243040 kubelet[3195]: I0314 00:21:43.242993 3195 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 14 00:21:43.244540 kubelet[3195]: I0314 00:21:43.244517 3195 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 14 00:21:43.244664 kubelet[3195]: I0314 00:21:43.244654 3195 status_manager.go:249] "Starting to sync pod status with apiserver"
Mar 14 00:21:43.244758 kubelet[3195]: I0314 00:21:43.244748 3195 kubelet.go:2501] "Starting kubelet main sync loop"
Mar 14 00:21:43.244876 kubelet[3195]: E0314 00:21:43.244855 3195 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 14 00:21:43.291441 kubelet[3195]: I0314 00:21:43.291176 3195 cpu_manager.go:225] "Starting" policy="none"
Mar 14 00:21:43.291441 kubelet[3195]: I0314 00:21:43.291191 3195 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 14 00:21:43.291441 kubelet[3195]: I0314 00:21:43.291209 3195 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Mar 14 00:21:43.291441 kubelet[3195]: I0314 00:21:43.291328 3195 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
Mar 14 00:21:43.291441 kubelet[3195]: I0314 00:21:43.291338 3195 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
Mar 14 00:21:43.291441 kubelet[3195]: I0314 00:21:43.291355 3195 policy_none.go:50] "Start"
Mar 14 00:21:43.291441 kubelet[3195]: I0314 00:21:43.291363 3195 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 14 00:21:43.291441 kubelet[3195]: I0314 00:21:43.291372 3195 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 14 00:21:43.294291 kubelet[3195]: I0314 00:21:43.293820 3195 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Mar 14 00:21:43.294291 kubelet[3195]: I0314 00:21:43.293843 3195 policy_none.go:44] "Start"
Mar 14 00:21:43.301044 kubelet[3195]: E0314 00:21:43.300644 3195 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 14 00:21:43.301044 kubelet[3195]: I0314 00:21:43.300850 3195 eviction_manager.go:194] "Eviction manager: starting control loop"
Mar 14 00:21:43.301044 kubelet[3195]: I0314 00:21:43.300876 3195 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 14 00:21:43.301343 kubelet[3195]: I0314 00:21:43.301322 3195 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Mar 14 00:21:43.304255 kubelet[3195]: E0314 00:21:43.304194 3195 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 14 00:21:43.346654 kubelet[3195]: I0314 00:21:43.346357 3195 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-23-47"
Mar 14 00:21:43.346654 kubelet[3195]: I0314 00:21:43.346594 3195 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-23-47"
Mar 14 00:21:43.351802 kubelet[3195]: I0314 00:21:43.346357 3195 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-23-47"
Mar 14 00:21:43.360499 kubelet[3195]: E0314 00:21:43.360366 3195 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-23-47\" already exists" pod="kube-system/kube-scheduler-ip-172-31-23-47"
Mar 14 00:21:43.361309 kubelet[3195]: E0314 00:21:43.361203 3195 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-23-47\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-23-47"
Mar 14 00:21:43.361309 kubelet[3195]: E0314 00:21:43.361276 3195 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-23-47\" already exists" pod="kube-system/kube-apiserver-ip-172-31-23-47"
Mar 14 00:21:43.412649 kubelet[3195]: I0314 00:21:43.412619 3195 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-23-47"
Mar 14 00:21:43.424142 kubelet[3195]: I0314 00:21:43.424103 3195 kubelet_node_status.go:123] "Node was previously registered" node="ip-172-31-23-47"
Mar 14 00:21:43.424279 kubelet[3195]: I0314 00:21:43.424179 3195 kubelet_node_status.go:77] "Successfully registered node" node="ip-172-31-23-47"
Mar 14 00:21:43.454994 sudo[3233]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Mar 14 00:21:43.455479 sudo[3233]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Mar 14 00:21:43.533543 kubelet[3195]: I0314 00:21:43.533119 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e8c3f196db3d0828eaf902542551d9f5-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-47\" (UID: \"e8c3f196db3d0828eaf902542551d9f5\") " pod="kube-system/kube-apiserver-ip-172-31-23-47"
Mar 14 00:21:43.533543 kubelet[3195]: I0314 00:21:43.533172 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e8c3f196db3d0828eaf902542551d9f5-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-47\" (UID: \"e8c3f196db3d0828eaf902542551d9f5\") " pod="kube-system/kube-apiserver-ip-172-31-23-47"
Mar 14 00:21:43.533543 kubelet[3195]: I0314 00:21:43.533201 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c0a23b7c2b8b5513c1b971f079267b86-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-47\" (UID: \"c0a23b7c2b8b5513c1b971f079267b86\") " pod="kube-system/kube-controller-manager-ip-172-31-23-47"
Mar 14 00:21:43.533543 kubelet[3195]: I0314 00:21:43.533246 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume
started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e8c3f196db3d0828eaf902542551d9f5-ca-certs\") pod \"kube-apiserver-ip-172-31-23-47\" (UID: \"e8c3f196db3d0828eaf902542551d9f5\") " pod="kube-system/kube-apiserver-ip-172-31-23-47" Mar 14 00:21:43.533543 kubelet[3195]: I0314 00:21:43.533274 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c0a23b7c2b8b5513c1b971f079267b86-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-47\" (UID: \"c0a23b7c2b8b5513c1b971f079267b86\") " pod="kube-system/kube-controller-manager-ip-172-31-23-47" Mar 14 00:21:43.533878 kubelet[3195]: I0314 00:21:43.533301 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c0a23b7c2b8b5513c1b971f079267b86-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-47\" (UID: \"c0a23b7c2b8b5513c1b971f079267b86\") " pod="kube-system/kube-controller-manager-ip-172-31-23-47" Mar 14 00:21:43.533878 kubelet[3195]: I0314 00:21:43.533328 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c0a23b7c2b8b5513c1b971f079267b86-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-47\" (UID: \"c0a23b7c2b8b5513c1b971f079267b86\") " pod="kube-system/kube-controller-manager-ip-172-31-23-47" Mar 14 00:21:43.533878 kubelet[3195]: I0314 00:21:43.533382 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c0a23b7c2b8b5513c1b971f079267b86-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-47\" (UID: \"c0a23b7c2b8b5513c1b971f079267b86\") " pod="kube-system/kube-controller-manager-ip-172-31-23-47" Mar 14 00:21:43.533878 kubelet[3195]: I0314 00:21:43.533428 3195 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/61e45e89fa668fa36222cca7258361d2-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-47\" (UID: \"61e45e89fa668fa36222cca7258361d2\") " pod="kube-system/kube-scheduler-ip-172-31-23-47" Mar 14 00:21:44.157302 sudo[3233]: pam_unix(sudo:session): session closed for user root Mar 14 00:21:44.199387 kubelet[3195]: I0314 00:21:44.199338 3195 apiserver.go:52] "Watching apiserver" Mar 14 00:21:44.230318 kubelet[3195]: I0314 00:21:44.230260 3195 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 14 00:21:44.285111 kubelet[3195]: I0314 00:21:44.284147 3195 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-23-47" Mar 14 00:21:44.286980 kubelet[3195]: I0314 00:21:44.285792 3195 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-23-47" Mar 14 00:21:44.300834 kubelet[3195]: E0314 00:21:44.300002 3195 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-23-47\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-23-47" Mar 14 00:21:44.306246 kubelet[3195]: E0314 00:21:44.306054 3195 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-23-47\" already exists" pod="kube-system/kube-apiserver-ip-172-31-23-47" Mar 14 00:21:44.342432 kubelet[3195]: I0314 00:21:44.340375 3195 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-23-47" podStartSLOduration=3.340359951 podStartE2EDuration="3.340359951s" podCreationTimestamp="2026-03-14 00:21:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:21:44.339837487 +0000 UTC m=+1.237159718" 
watchObservedRunningTime="2026-03-14 00:21:44.340359951 +0000 UTC m=+1.237682187" Mar 14 00:21:44.368197 kubelet[3195]: I0314 00:21:44.368080 3195 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-23-47" podStartSLOduration=3.368061181 podStartE2EDuration="3.368061181s" podCreationTimestamp="2026-03-14 00:21:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:21:44.35420633 +0000 UTC m=+1.251528568" watchObservedRunningTime="2026-03-14 00:21:44.368061181 +0000 UTC m=+1.265383406" Mar 14 00:21:44.368497 kubelet[3195]: I0314 00:21:44.368341 3195 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-23-47" podStartSLOduration=3.368333489 podStartE2EDuration="3.368333489s" podCreationTimestamp="2026-03-14 00:21:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:21:44.367596211 +0000 UTC m=+1.264918446" watchObservedRunningTime="2026-03-14 00:21:44.368333489 +0000 UTC m=+1.265655726" Mar 14 00:21:45.861264 sudo[2300]: pam_unix(sudo:session): session closed for user root Mar 14 00:21:45.938341 sshd[2297]: pam_unix(sshd:session): session closed for user core Mar 14 00:21:45.942831 systemd-logind[1962]: Session 7 logged out. Waiting for processes to exit. Mar 14 00:21:45.943097 systemd[1]: sshd@6-172.31.23.47:22-68.220.241.50:39724.service: Deactivated successfully. Mar 14 00:21:45.945751 systemd[1]: session-7.scope: Deactivated successfully. Mar 14 00:21:45.945953 systemd[1]: session-7.scope: Consumed 4.084s CPU time, 149.3M memory peak, 0B memory swap peak. Mar 14 00:21:45.946955 systemd-logind[1962]: Removed session 7. 
Mar 14 00:21:48.749857 kubelet[3195]: I0314 00:21:48.749811 3195 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 14 00:21:48.750355 containerd[1984]: time="2026-03-14T00:21:48.750296006Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 14 00:21:48.750723 kubelet[3195]: I0314 00:21:48.750531 3195 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 14 00:21:49.844077 systemd[1]: Created slice kubepods-burstable-pod2c05e857_e645_4351_9a22_c2489fb18543.slice - libcontainer container kubepods-burstable-pod2c05e857_e645_4351_9a22_c2489fb18543.slice. Mar 14 00:21:49.875101 systemd[1]: Created slice kubepods-besteffort-pod13e4a0c3_55ef_44e4_95dd_2f3b3a473d6f.slice - libcontainer container kubepods-besteffort-pod13e4a0c3_55ef_44e4_95dd_2f3b3a473d6f.slice. Mar 14 00:21:49.877548 kubelet[3195]: I0314 00:21:49.877513 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-hostproc\") pod \"cilium-fk9zk\" (UID: \"2c05e857-e645-4351-9a22-c2489fb18543\") " pod="kube-system/cilium-fk9zk" Mar 14 00:21:49.877946 kubelet[3195]: I0314 00:21:49.877556 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-cilium-cgroup\") pod \"cilium-fk9zk\" (UID: \"2c05e857-e645-4351-9a22-c2489fb18543\") " pod="kube-system/cilium-fk9zk" Mar 14 00:21:49.877946 kubelet[3195]: I0314 00:21:49.877580 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-cni-path\") pod \"cilium-fk9zk\" (UID: \"2c05e857-e645-4351-9a22-c2489fb18543\") " 
pod="kube-system/cilium-fk9zk" Mar 14 00:21:49.877946 kubelet[3195]: I0314 00:21:49.877629 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-xtables-lock\") pod \"cilium-fk9zk\" (UID: \"2c05e857-e645-4351-9a22-c2489fb18543\") " pod="kube-system/cilium-fk9zk" Mar 14 00:21:49.877946 kubelet[3195]: I0314 00:21:49.877652 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-etc-cni-netd\") pod \"cilium-fk9zk\" (UID: \"2c05e857-e645-4351-9a22-c2489fb18543\") " pod="kube-system/cilium-fk9zk" Mar 14 00:21:49.877946 kubelet[3195]: I0314 00:21:49.877681 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2c05e857-e645-4351-9a22-c2489fb18543-cilium-config-path\") pod \"cilium-fk9zk\" (UID: \"2c05e857-e645-4351-9a22-c2489fb18543\") " pod="kube-system/cilium-fk9zk" Mar 14 00:21:49.877946 kubelet[3195]: I0314 00:21:49.877702 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-host-proc-sys-net\") pod \"cilium-fk9zk\" (UID: \"2c05e857-e645-4351-9a22-c2489fb18543\") " pod="kube-system/cilium-fk9zk" Mar 14 00:21:49.878194 kubelet[3195]: I0314 00:21:49.877733 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13e4a0c3-55ef-44e4-95dd-2f3b3a473d6f-xtables-lock\") pod \"kube-proxy-db2f8\" (UID: \"13e4a0c3-55ef-44e4-95dd-2f3b3a473d6f\") " pod="kube-system/kube-proxy-db2f8" Mar 14 00:21:49.878194 kubelet[3195]: I0314 00:21:49.877759 3195 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btl6r\" (UniqueName: \"kubernetes.io/projected/13e4a0c3-55ef-44e4-95dd-2f3b3a473d6f-kube-api-access-btl6r\") pod \"kube-proxy-db2f8\" (UID: \"13e4a0c3-55ef-44e4-95dd-2f3b3a473d6f\") " pod="kube-system/kube-proxy-db2f8" Mar 14 00:21:49.878194 kubelet[3195]: I0314 00:21:49.877789 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-bpf-maps\") pod \"cilium-fk9zk\" (UID: \"2c05e857-e645-4351-9a22-c2489fb18543\") " pod="kube-system/cilium-fk9zk" Mar 14 00:21:49.878194 kubelet[3195]: I0314 00:21:49.877812 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-lib-modules\") pod \"cilium-fk9zk\" (UID: \"2c05e857-e645-4351-9a22-c2489fb18543\") " pod="kube-system/cilium-fk9zk" Mar 14 00:21:49.878194 kubelet[3195]: I0314 00:21:49.877833 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2c05e857-e645-4351-9a22-c2489fb18543-hubble-tls\") pod \"cilium-fk9zk\" (UID: \"2c05e857-e645-4351-9a22-c2489fb18543\") " pod="kube-system/cilium-fk9zk" Mar 14 00:21:49.878194 kubelet[3195]: I0314 00:21:49.877854 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/13e4a0c3-55ef-44e4-95dd-2f3b3a473d6f-kube-proxy\") pod \"kube-proxy-db2f8\" (UID: \"13e4a0c3-55ef-44e4-95dd-2f3b3a473d6f\") " pod="kube-system/kube-proxy-db2f8" Mar 14 00:21:49.878508 kubelet[3195]: I0314 00:21:49.877876 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/13e4a0c3-55ef-44e4-95dd-2f3b3a473d6f-lib-modules\") pod \"kube-proxy-db2f8\" (UID: \"13e4a0c3-55ef-44e4-95dd-2f3b3a473d6f\") " pod="kube-system/kube-proxy-db2f8" Mar 14 00:21:49.878508 kubelet[3195]: I0314 00:21:49.877897 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-cilium-run\") pod \"cilium-fk9zk\" (UID: \"2c05e857-e645-4351-9a22-c2489fb18543\") " pod="kube-system/cilium-fk9zk" Mar 14 00:21:49.878508 kubelet[3195]: I0314 00:21:49.877916 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2c05e857-e645-4351-9a22-c2489fb18543-clustermesh-secrets\") pod \"cilium-fk9zk\" (UID: \"2c05e857-e645-4351-9a22-c2489fb18543\") " pod="kube-system/cilium-fk9zk" Mar 14 00:21:49.878508 kubelet[3195]: I0314 00:21:49.877938 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-host-proc-sys-kernel\") pod \"cilium-fk9zk\" (UID: \"2c05e857-e645-4351-9a22-c2489fb18543\") " pod="kube-system/cilium-fk9zk" Mar 14 00:21:49.878508 kubelet[3195]: I0314 00:21:49.877970 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-445pn\" (UniqueName: \"kubernetes.io/projected/2c05e857-e645-4351-9a22-c2489fb18543-kube-api-access-445pn\") pod \"cilium-fk9zk\" (UID: \"2c05e857-e645-4351-9a22-c2489fb18543\") " pod="kube-system/cilium-fk9zk" Mar 14 00:21:49.947448 systemd[1]: Created slice kubepods-besteffort-pod0906451b_22e7_4a77_ad89_2d3271295240.slice - libcontainer container kubepods-besteffort-pod0906451b_22e7_4a77_ad89_2d3271295240.slice. 
Mar 14 00:21:49.978730 kubelet[3195]: I0314 00:21:49.978253 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0906451b-22e7-4a77-ad89-2d3271295240-cilium-config-path\") pod \"cilium-operator-78cf5644cb-mgk8t\" (UID: \"0906451b-22e7-4a77-ad89-2d3271295240\") " pod="kube-system/cilium-operator-78cf5644cb-mgk8t" Mar 14 00:21:49.978730 kubelet[3195]: I0314 00:21:49.978525 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x9bk\" (UniqueName: \"kubernetes.io/projected/0906451b-22e7-4a77-ad89-2d3271295240-kube-api-access-2x9bk\") pod \"cilium-operator-78cf5644cb-mgk8t\" (UID: \"0906451b-22e7-4a77-ad89-2d3271295240\") " pod="kube-system/cilium-operator-78cf5644cb-mgk8t" Mar 14 00:21:50.180171 containerd[1984]: time="2026-03-14T00:21:50.179674865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fk9zk,Uid:2c05e857-e645-4351-9a22-c2489fb18543,Namespace:kube-system,Attempt:0,}" Mar 14 00:21:50.191046 containerd[1984]: time="2026-03-14T00:21:50.191004708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-db2f8,Uid:13e4a0c3-55ef-44e4-95dd-2f3b3a473d6f,Namespace:kube-system,Attempt:0,}" Mar 14 00:21:50.224280 containerd[1984]: time="2026-03-14T00:21:50.223501858Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:21:50.224280 containerd[1984]: time="2026-03-14T00:21:50.223775745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:21:50.224280 containerd[1984]: time="2026-03-14T00:21:50.223795691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:21:50.224280 containerd[1984]: time="2026-03-14T00:21:50.224016647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:21:50.253982 containerd[1984]: time="2026-03-14T00:21:50.252873198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:21:50.253982 containerd[1984]: time="2026-03-14T00:21:50.252951542Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:21:50.253982 containerd[1984]: time="2026-03-14T00:21:50.252970549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:21:50.253982 containerd[1984]: time="2026-03-14T00:21:50.253073731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:21:50.255631 systemd[1]: Started cri-containerd-d2ef007924fc1bdcc63f8dcb983924c1efa5e9bc502cf5d7e18c5bbe22686d82.scope - libcontainer container d2ef007924fc1bdcc63f8dcb983924c1efa5e9bc502cf5d7e18c5bbe22686d82. Mar 14 00:21:50.257468 containerd[1984]: time="2026-03-14T00:21:50.257429711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-mgk8t,Uid:0906451b-22e7-4a77-ad89-2d3271295240,Namespace:kube-system,Attempt:0,}" Mar 14 00:21:50.289632 systemd[1]: Started cri-containerd-8a3b30addc393703f89abb381989ff5ec21e576a9d89b8f8048a88da87ea0443.scope - libcontainer container 8a3b30addc393703f89abb381989ff5ec21e576a9d89b8f8048a88da87ea0443. Mar 14 00:21:50.333539 containerd[1984]: time="2026-03-14T00:21:50.332063469Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:21:50.333539 containerd[1984]: time="2026-03-14T00:21:50.332150015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:21:50.333539 containerd[1984]: time="2026-03-14T00:21:50.332172938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:21:50.333539 containerd[1984]: time="2026-03-14T00:21:50.332277012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:21:50.335662 containerd[1984]: time="2026-03-14T00:21:50.335550300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fk9zk,Uid:2c05e857-e645-4351-9a22-c2489fb18543,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2ef007924fc1bdcc63f8dcb983924c1efa5e9bc502cf5d7e18c5bbe22686d82\"" Mar 14 00:21:50.342243 containerd[1984]: time="2026-03-14T00:21:50.341609046Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 14 00:21:50.365435 systemd[1]: Started cri-containerd-7ee9434b6c1dc7978924145a84593c24b15d992bf2c0a76d829f055daddb8fb1.scope - libcontainer container 7ee9434b6c1dc7978924145a84593c24b15d992bf2c0a76d829f055daddb8fb1. 
Mar 14 00:21:50.375144 containerd[1984]: time="2026-03-14T00:21:50.375077081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-db2f8,Uid:13e4a0c3-55ef-44e4-95dd-2f3b3a473d6f,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a3b30addc393703f89abb381989ff5ec21e576a9d89b8f8048a88da87ea0443\"" Mar 14 00:21:50.390454 containerd[1984]: time="2026-03-14T00:21:50.390389293Z" level=info msg="CreateContainer within sandbox \"8a3b30addc393703f89abb381989ff5ec21e576a9d89b8f8048a88da87ea0443\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 14 00:21:50.420291 containerd[1984]: time="2026-03-14T00:21:50.419937459Z" level=info msg="CreateContainer within sandbox \"8a3b30addc393703f89abb381989ff5ec21e576a9d89b8f8048a88da87ea0443\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6bba31c90a95ffd7c836a8f33055904c4b4f38a3005dcde0ab08ec5d359b20d3\"" Mar 14 00:21:50.421733 containerd[1984]: time="2026-03-14T00:21:50.421511419Z" level=info msg="StartContainer for \"6bba31c90a95ffd7c836a8f33055904c4b4f38a3005dcde0ab08ec5d359b20d3\"" Mar 14 00:21:50.444842 containerd[1984]: time="2026-03-14T00:21:50.444677299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-mgk8t,Uid:0906451b-22e7-4a77-ad89-2d3271295240,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ee9434b6c1dc7978924145a84593c24b15d992bf2c0a76d829f055daddb8fb1\"" Mar 14 00:21:50.469620 systemd[1]: Started cri-containerd-6bba31c90a95ffd7c836a8f33055904c4b4f38a3005dcde0ab08ec5d359b20d3.scope - libcontainer container 6bba31c90a95ffd7c836a8f33055904c4b4f38a3005dcde0ab08ec5d359b20d3. 
Mar 14 00:21:50.501354 containerd[1984]: time="2026-03-14T00:21:50.501247440Z" level=info msg="StartContainer for \"6bba31c90a95ffd7c836a8f33055904c4b4f38a3005dcde0ab08ec5d359b20d3\" returns successfully" Mar 14 00:21:51.316072 kubelet[3195]: I0314 00:21:51.313801 3195 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-db2f8" podStartSLOduration=2.313775469 podStartE2EDuration="2.313775469s" podCreationTimestamp="2026-03-14 00:21:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:21:51.313572143 +0000 UTC m=+8.210894376" watchObservedRunningTime="2026-03-14 00:21:51.313775469 +0000 UTC m=+8.211097702" Mar 14 00:21:55.245222 update_engine[1963]: I20260314 00:21:55.245149 1963 update_attempter.cc:509] Updating boot flags... Mar 14 00:21:55.322484 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3580) Mar 14 00:21:55.526574 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3580) Mar 14 00:21:55.750245 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3580) Mar 14 00:21:59.211671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3817467589.mount: Deactivated successfully. 
Mar 14 00:22:01.780122 containerd[1984]: time="2026-03-14T00:22:01.780040024Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 14 00:22:01.788667 containerd[1984]: time="2026-03-14T00:22:01.787473851Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.445815305s" Mar 14 00:22:01.788667 containerd[1984]: time="2026-03-14T00:22:01.787529104Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 14 00:22:01.846542 containerd[1984]: time="2026-03-14T00:22:01.845728001Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:22:01.848310 containerd[1984]: time="2026-03-14T00:22:01.847829510Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:22:01.848310 containerd[1984]: time="2026-03-14T00:22:01.847970265Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 14 00:22:01.853523 containerd[1984]: time="2026-03-14T00:22:01.853483232Z" level=info msg="CreateContainer within sandbox \"d2ef007924fc1bdcc63f8dcb983924c1efa5e9bc502cf5d7e18c5bbe22686d82\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 14 00:22:01.949161 containerd[1984]: time="2026-03-14T00:22:01.949125304Z" level=info msg="CreateContainer within sandbox \"d2ef007924fc1bdcc63f8dcb983924c1efa5e9bc502cf5d7e18c5bbe22686d82\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"428028c9e022940f93aac3d752be938d72b4bf2614f038d8a99ac9661493ffa6\"" Mar 14 00:22:01.949795 containerd[1984]: time="2026-03-14T00:22:01.949756399Z" level=info msg="StartContainer for \"428028c9e022940f93aac3d752be938d72b4bf2614f038d8a99ac9661493ffa6\"" Mar 14 00:22:02.132641 systemd[1]: Started cri-containerd-428028c9e022940f93aac3d752be938d72b4bf2614f038d8a99ac9661493ffa6.scope - libcontainer container 428028c9e022940f93aac3d752be938d72b4bf2614f038d8a99ac9661493ffa6. Mar 14 00:22:02.180081 containerd[1984]: time="2026-03-14T00:22:02.180033515Z" level=info msg="StartContainer for \"428028c9e022940f93aac3d752be938d72b4bf2614f038d8a99ac9661493ffa6\" returns successfully" Mar 14 00:22:02.189702 systemd[1]: cri-containerd-428028c9e022940f93aac3d752be938d72b4bf2614f038d8a99ac9661493ffa6.scope: Deactivated successfully. Mar 14 00:22:02.380779 containerd[1984]: time="2026-03-14T00:22:02.361076571Z" level=info msg="shim disconnected" id=428028c9e022940f93aac3d752be938d72b4bf2614f038d8a99ac9661493ffa6 namespace=k8s.io Mar 14 00:22:02.380779 containerd[1984]: time="2026-03-14T00:22:02.380542882Z" level=warning msg="cleaning up after shim disconnected" id=428028c9e022940f93aac3d752be938d72b4bf2614f038d8a99ac9661493ffa6 namespace=k8s.io Mar 14 00:22:02.380779 containerd[1984]: time="2026-03-14T00:22:02.380564275Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:22:02.937345 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-428028c9e022940f93aac3d752be938d72b4bf2614f038d8a99ac9661493ffa6-rootfs.mount: Deactivated successfully. 
Mar 14 00:22:03.160123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2451475125.mount: Deactivated successfully. Mar 14 00:22:03.394152 containerd[1984]: time="2026-03-14T00:22:03.394001335Z" level=info msg="CreateContainer within sandbox \"d2ef007924fc1bdcc63f8dcb983924c1efa5e9bc502cf5d7e18c5bbe22686d82\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 14 00:22:03.412691 containerd[1984]: time="2026-03-14T00:22:03.412641717Z" level=info msg="CreateContainer within sandbox \"d2ef007924fc1bdcc63f8dcb983924c1efa5e9bc502cf5d7e18c5bbe22686d82\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6e76500461a1abbcb0f521502edf8a0a53a59bcf5e72b3cc6f2d7ae3f1e68a5a\"" Mar 14 00:22:03.414896 containerd[1984]: time="2026-03-14T00:22:03.414865193Z" level=info msg="StartContainer for \"6e76500461a1abbcb0f521502edf8a0a53a59bcf5e72b3cc6f2d7ae3f1e68a5a\"" Mar 14 00:22:03.448637 systemd[1]: Started cri-containerd-6e76500461a1abbcb0f521502edf8a0a53a59bcf5e72b3cc6f2d7ae3f1e68a5a.scope - libcontainer container 6e76500461a1abbcb0f521502edf8a0a53a59bcf5e72b3cc6f2d7ae3f1e68a5a. Mar 14 00:22:03.476717 containerd[1984]: time="2026-03-14T00:22:03.476666884Z" level=info msg="StartContainer for \"6e76500461a1abbcb0f521502edf8a0a53a59bcf5e72b3cc6f2d7ae3f1e68a5a\" returns successfully" Mar 14 00:22:03.489861 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 14 00:22:03.490720 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 14 00:22:03.490820 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 14 00:22:03.496610 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 14 00:22:03.496908 systemd[1]: cri-containerd-6e76500461a1abbcb0f521502edf8a0a53a59bcf5e72b3cc6f2d7ae3f1e68a5a.scope: Deactivated successfully. Mar 14 00:22:03.545321 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Mar 14 00:22:03.557916 containerd[1984]: time="2026-03-14T00:22:03.557850084Z" level=info msg="shim disconnected" id=6e76500461a1abbcb0f521502edf8a0a53a59bcf5e72b3cc6f2d7ae3f1e68a5a namespace=k8s.io Mar 14 00:22:03.557916 containerd[1984]: time="2026-03-14T00:22:03.557910249Z" level=warning msg="cleaning up after shim disconnected" id=6e76500461a1abbcb0f521502edf8a0a53a59bcf5e72b3cc6f2d7ae3f1e68a5a namespace=k8s.io Mar 14 00:22:03.558192 containerd[1984]: time="2026-03-14T00:22:03.557921797Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:22:03.936924 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e76500461a1abbcb0f521502edf8a0a53a59bcf5e72b3cc6f2d7ae3f1e68a5a-rootfs.mount: Deactivated successfully. Mar 14 00:22:04.353958 containerd[1984]: time="2026-03-14T00:22:04.353909931Z" level=info msg="CreateContainer within sandbox \"d2ef007924fc1bdcc63f8dcb983924c1efa5e9bc502cf5d7e18c5bbe22686d82\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 14 00:22:04.390004 containerd[1984]: time="2026-03-14T00:22:04.389853287Z" level=info msg="CreateContainer within sandbox \"d2ef007924fc1bdcc63f8dcb983924c1efa5e9bc502cf5d7e18c5bbe22686d82\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"40677cd6d602e53b089a2266adb6c2f52c4b958dc2ebb5c82276486d14e17dc0\"" Mar 14 00:22:04.391005 containerd[1984]: time="2026-03-14T00:22:04.390965789Z" level=info msg="StartContainer for \"40677cd6d602e53b089a2266adb6c2f52c4b958dc2ebb5c82276486d14e17dc0\"" Mar 14 00:22:04.432090 systemd[1]: run-containerd-runc-k8s.io-40677cd6d602e53b089a2266adb6c2f52c4b958dc2ebb5c82276486d14e17dc0-runc.cBCSRz.mount: Deactivated successfully. Mar 14 00:22:04.439603 systemd[1]: Started cri-containerd-40677cd6d602e53b089a2266adb6c2f52c4b958dc2ebb5c82276486d14e17dc0.scope - libcontainer container 40677cd6d602e53b089a2266adb6c2f52c4b958dc2ebb5c82276486d14e17dc0. 
Mar 14 00:22:04.473950 systemd[1]: cri-containerd-40677cd6d602e53b089a2266adb6c2f52c4b958dc2ebb5c82276486d14e17dc0.scope: Deactivated successfully.
Mar 14 00:22:04.476093 containerd[1984]: time="2026-03-14T00:22:04.475847747Z" level=info msg="StartContainer for \"40677cd6d602e53b089a2266adb6c2f52c4b958dc2ebb5c82276486d14e17dc0\" returns successfully"
Mar 14 00:22:04.509534 containerd[1984]: time="2026-03-14T00:22:04.509134237Z" level=info msg="shim disconnected" id=40677cd6d602e53b089a2266adb6c2f52c4b958dc2ebb5c82276486d14e17dc0 namespace=k8s.io
Mar 14 00:22:04.509534 containerd[1984]: time="2026-03-14T00:22:04.509191938Z" level=warning msg="cleaning up after shim disconnected" id=40677cd6d602e53b089a2266adb6c2f52c4b958dc2ebb5c82276486d14e17dc0 namespace=k8s.io
Mar 14 00:22:04.509534 containerd[1984]: time="2026-03-14T00:22:04.509201243Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:22:04.937018 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-40677cd6d602e53b089a2266adb6c2f52c4b958dc2ebb5c82276486d14e17dc0-rootfs.mount: Deactivated successfully.
Mar 14 00:22:05.360219 containerd[1984]: time="2026-03-14T00:22:05.360015447Z" level=info msg="CreateContainer within sandbox \"d2ef007924fc1bdcc63f8dcb983924c1efa5e9bc502cf5d7e18c5bbe22686d82\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 14 00:22:05.389163 containerd[1984]: time="2026-03-14T00:22:05.388754405Z" level=info msg="CreateContainer within sandbox \"d2ef007924fc1bdcc63f8dcb983924c1efa5e9bc502cf5d7e18c5bbe22686d82\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8bc8b190e730acb498b0348386e92e0d667a0472a5a4e49d2ccbd8be3c6bc394\""
Mar 14 00:22:05.391877 containerd[1984]: time="2026-03-14T00:22:05.390732724Z" level=info msg="StartContainer for \"8bc8b190e730acb498b0348386e92e0d667a0472a5a4e49d2ccbd8be3c6bc394\""
Mar 14 00:22:05.449631 systemd[1]: Started cri-containerd-8bc8b190e730acb498b0348386e92e0d667a0472a5a4e49d2ccbd8be3c6bc394.scope - libcontainer container 8bc8b190e730acb498b0348386e92e0d667a0472a5a4e49d2ccbd8be3c6bc394.
Mar 14 00:22:05.475971 systemd[1]: cri-containerd-8bc8b190e730acb498b0348386e92e0d667a0472a5a4e49d2ccbd8be3c6bc394.scope: Deactivated successfully.
Mar 14 00:22:05.478467 containerd[1984]: time="2026-03-14T00:22:05.478245481Z" level=info msg="StartContainer for \"8bc8b190e730acb498b0348386e92e0d667a0472a5a4e49d2ccbd8be3c6bc394\" returns successfully"
Mar 14 00:22:05.513595 containerd[1984]: time="2026-03-14T00:22:05.513508238Z" level=info msg="shim disconnected" id=8bc8b190e730acb498b0348386e92e0d667a0472a5a4e49d2ccbd8be3c6bc394 namespace=k8s.io
Mar 14 00:22:05.513595 containerd[1984]: time="2026-03-14T00:22:05.513570629Z" level=warning msg="cleaning up after shim disconnected" id=8bc8b190e730acb498b0348386e92e0d667a0472a5a4e49d2ccbd8be3c6bc394 namespace=k8s.io
Mar 14 00:22:05.513595 containerd[1984]: time="2026-03-14T00:22:05.513584702Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:22:05.936977 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8bc8b190e730acb498b0348386e92e0d667a0472a5a4e49d2ccbd8be3c6bc394-rootfs.mount: Deactivated successfully.
Mar 14 00:22:06.370707 containerd[1984]: time="2026-03-14T00:22:06.370662810Z" level=info msg="CreateContainer within sandbox \"d2ef007924fc1bdcc63f8dcb983924c1efa5e9bc502cf5d7e18c5bbe22686d82\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 14 00:22:06.400776 containerd[1984]: time="2026-03-14T00:22:06.400733092Z" level=info msg="CreateContainer within sandbox \"d2ef007924fc1bdcc63f8dcb983924c1efa5e9bc502cf5d7e18c5bbe22686d82\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a28b93738c18e4ecfc2249305d2fa17ce6d47a279113f7997d46cbbd4876a32e\""
Mar 14 00:22:06.401363 containerd[1984]: time="2026-03-14T00:22:06.401327447Z" level=info msg="StartContainer for \"a28b93738c18e4ecfc2249305d2fa17ce6d47a279113f7997d46cbbd4876a32e\""
Mar 14 00:22:06.438609 systemd[1]: Started cri-containerd-a28b93738c18e4ecfc2249305d2fa17ce6d47a279113f7997d46cbbd4876a32e.scope - libcontainer container a28b93738c18e4ecfc2249305d2fa17ce6d47a279113f7997d46cbbd4876a32e.
Mar 14 00:22:06.476809 containerd[1984]: time="2026-03-14T00:22:06.476760164Z" level=info msg="StartContainer for \"a28b93738c18e4ecfc2249305d2fa17ce6d47a279113f7997d46cbbd4876a32e\" returns successfully"
Mar 14 00:22:06.670671 kubelet[3195]: I0314 00:22:06.670406 3195 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
Mar 14 00:22:06.738017 systemd[1]: Created slice kubepods-burstable-poda5ddfab1_307b_4fb4_951c_9a9ec68527ef.slice - libcontainer container kubepods-burstable-poda5ddfab1_307b_4fb4_951c_9a9ec68527ef.slice.
Mar 14 00:22:06.756060 systemd[1]: Created slice kubepods-burstable-pod85fe74b4_90c3_4cbf_acb2_df68405eec96.slice - libcontainer container kubepods-burstable-pod85fe74b4_90c3_4cbf_acb2_df68405eec96.slice.
Mar 14 00:22:06.798417 kubelet[3195]: I0314 00:22:06.798355 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5ddfab1-307b-4fb4-951c-9a9ec68527ef-config-volume\") pod \"coredns-7d764666f9-t4gzv\" (UID: \"a5ddfab1-307b-4fb4-951c-9a9ec68527ef\") " pod="kube-system/coredns-7d764666f9-t4gzv"
Mar 14 00:22:06.798607 kubelet[3195]: I0314 00:22:06.798426 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85fe74b4-90c3-4cbf-acb2-df68405eec96-config-volume\") pod \"coredns-7d764666f9-kqr4g\" (UID: \"85fe74b4-90c3-4cbf-acb2-df68405eec96\") " pod="kube-system/coredns-7d764666f9-kqr4g"
Mar 14 00:22:06.798607 kubelet[3195]: I0314 00:22:06.798456 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h6g7\" (UniqueName: \"kubernetes.io/projected/85fe74b4-90c3-4cbf-acb2-df68405eec96-kube-api-access-8h6g7\") pod \"coredns-7d764666f9-kqr4g\" (UID: \"85fe74b4-90c3-4cbf-acb2-df68405eec96\") " pod="kube-system/coredns-7d764666f9-kqr4g"
Mar 14 00:22:06.798607 kubelet[3195]: I0314 00:22:06.798484 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djxpp\" (UniqueName: \"kubernetes.io/projected/a5ddfab1-307b-4fb4-951c-9a9ec68527ef-kube-api-access-djxpp\") pod \"coredns-7d764666f9-t4gzv\" (UID: \"a5ddfab1-307b-4fb4-951c-9a9ec68527ef\") " pod="kube-system/coredns-7d764666f9-t4gzv"
Mar 14 00:22:07.052147 containerd[1984]: time="2026-03-14T00:22:07.051727985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-t4gzv,Uid:a5ddfab1-307b-4fb4-951c-9a9ec68527ef,Namespace:kube-system,Attempt:0,}"
Mar 14 00:22:07.068867 containerd[1984]: time="2026-03-14T00:22:07.068506114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-kqr4g,Uid:85fe74b4-90c3-4cbf-acb2-df68405eec96,Namespace:kube-system,Attempt:0,}"
Mar 14 00:22:07.384900 kubelet[3195]: I0314 00:22:07.384747 3195 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-fk9zk" podStartSLOduration=2.36803089 podStartE2EDuration="18.38472822s" podCreationTimestamp="2026-03-14 00:21:49 +0000 UTC" firstStartedPulling="2026-03-14 00:21:50.341139754 +0000 UTC m=+7.238461969" lastFinishedPulling="2026-03-14 00:22:06.357837066 +0000 UTC m=+23.255159299" observedRunningTime="2026-03-14 00:22:07.384425593 +0000 UTC m=+24.281747828" watchObservedRunningTime="2026-03-14 00:22:07.38472822 +0000 UTC m=+24.282050454"
Mar 14 00:22:09.095379 containerd[1984]: time="2026-03-14T00:22:09.095325500Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:22:09.097196 containerd[1984]: time="2026-03-14T00:22:09.097020268Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Mar 14 00:22:09.099664 containerd[1984]: time="2026-03-14T00:22:09.099173230Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:22:09.101174 containerd[1984]: time="2026-03-14T00:22:09.101021783Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 7.253015939s"
Mar 14 00:22:09.101174 containerd[1984]: time="2026-03-14T00:22:09.101065737Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Mar 14 00:22:09.108431 containerd[1984]: time="2026-03-14T00:22:09.108249802Z" level=info msg="CreateContainer within sandbox \"7ee9434b6c1dc7978924145a84593c24b15d992bf2c0a76d829f055daddb8fb1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 14 00:22:09.136673 containerd[1984]: time="2026-03-14T00:22:09.136622589Z" level=info msg="CreateContainer within sandbox \"7ee9434b6c1dc7978924145a84593c24b15d992bf2c0a76d829f055daddb8fb1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"5a2ce1f32479cfe53c6da5953dde24f4fa149439054fd35ad1e24f516dc84069\""
Mar 14 00:22:09.138189 containerd[1984]: time="2026-03-14T00:22:09.137269188Z" level=info msg="StartContainer for \"5a2ce1f32479cfe53c6da5953dde24f4fa149439054fd35ad1e24f516dc84069\""
Mar 14 00:22:09.178668 systemd[1]: Started cri-containerd-5a2ce1f32479cfe53c6da5953dde24f4fa149439054fd35ad1e24f516dc84069.scope - libcontainer container 5a2ce1f32479cfe53c6da5953dde24f4fa149439054fd35ad1e24f516dc84069.
Mar 14 00:22:09.210620 containerd[1984]: time="2026-03-14T00:22:09.210424199Z" level=info msg="StartContainer for \"5a2ce1f32479cfe53c6da5953dde24f4fa149439054fd35ad1e24f516dc84069\" returns successfully"
Mar 14 00:22:12.843738 systemd-networkd[1883]: cilium_host: Link UP
Mar 14 00:22:12.844681 (udev-worker)[4295]: Network interface NamePolicy= disabled on kernel command line.
Mar 14 00:22:12.845268 systemd-networkd[1883]: cilium_net: Link UP
Mar 14 00:22:12.846668 systemd-networkd[1883]: cilium_net: Gained carrier
Mar 14 00:22:12.846900 systemd-networkd[1883]: cilium_host: Gained carrier
Mar 14 00:22:12.847127 (udev-worker)[4294]: Network interface NamePolicy= disabled on kernel command line.
Mar 14 00:22:12.990630 systemd-networkd[1883]: cilium_vxlan: Link UP
Mar 14 00:22:12.990640 systemd-networkd[1883]: cilium_vxlan: Gained carrier
Mar 14 00:22:13.477677 systemd-networkd[1883]: cilium_net: Gained IPv6LL
Mar 14 00:22:13.635521 kernel: NET: Registered PF_ALG protocol family
Mar 14 00:22:13.669573 systemd-networkd[1883]: cilium_host: Gained IPv6LL
Mar 14 00:22:14.182370 systemd-networkd[1883]: cilium_vxlan: Gained IPv6LL
Mar 14 00:22:14.370223 systemd-networkd[1883]: lxc_health: Link UP
Mar 14 00:22:14.379474 systemd-networkd[1883]: lxc_health: Gained carrier
Mar 14 00:22:14.705305 systemd-networkd[1883]: lxc1dc3574439d6: Link UP
Mar 14 00:22:14.713421 kernel: eth0: renamed from tmp4c22d
Mar 14 00:22:14.720844 systemd-networkd[1883]: lxcc8e143a612f7: Link UP
Mar 14 00:22:14.730237 kernel: eth0: renamed from tmpeafdf
Mar 14 00:22:14.734886 systemd-networkd[1883]: lxc1dc3574439d6: Gained carrier
Mar 14 00:22:14.736217 (udev-worker)[4628]: Network interface NamePolicy= disabled on kernel command line.
Mar 14 00:22:14.737813 systemd-networkd[1883]: lxcc8e143a612f7: Gained carrier
Mar 14 00:22:15.973626 systemd-networkd[1883]: lxc_health: Gained IPv6LL
Mar 14 00:22:16.037668 systemd-networkd[1883]: lxcc8e143a612f7: Gained IPv6LL
Mar 14 00:22:16.199886 kubelet[3195]: I0314 00:22:16.199813 3195 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-operator-78cf5644cb-mgk8t" podStartSLOduration=8.54539183 podStartE2EDuration="27.199795635s" podCreationTimestamp="2026-03-14 00:21:49 +0000 UTC" firstStartedPulling="2026-03-14 00:21:50.447620001 +0000 UTC m=+7.344942212" lastFinishedPulling="2026-03-14 00:22:09.102023802 +0000 UTC m=+25.999346017" observedRunningTime="2026-03-14 00:22:09.397007845 +0000 UTC m=+26.294330080" watchObservedRunningTime="2026-03-14 00:22:16.199795635 +0000 UTC m=+33.097117870"
Mar 14 00:22:16.549718 systemd-networkd[1883]: lxc1dc3574439d6: Gained IPv6LL
Mar 14 00:22:18.909138 ntpd[1954]: Listen normally on 8 cilium_host 192.168.0.252:123
Mar 14 00:22:18.909240 ntpd[1954]: Listen normally on 9 cilium_net [fe80::8411:dfff:fe06:f3f3%4]:123
Mar 14 00:22:18.911518 ntpd[1954]: 14 Mar 00:22:18 ntpd[1954]: Listen normally on 8 cilium_host 192.168.0.252:123
Mar 14 00:22:18.911518 ntpd[1954]: 14 Mar 00:22:18 ntpd[1954]: Listen normally on 9 cilium_net [fe80::8411:dfff:fe06:f3f3%4]:123
Mar 14 00:22:18.911518 ntpd[1954]: 14 Mar 00:22:18 ntpd[1954]: Listen normally on 10 cilium_host [fe80::4c29:98ff:fec6:e34c%5]:123
Mar 14 00:22:18.911518 ntpd[1954]: 14 Mar 00:22:18 ntpd[1954]: Listen normally on 11 cilium_vxlan [fe80::688e:9fff:fef3:67d7%6]:123
Mar 14 00:22:18.911518 ntpd[1954]: 14 Mar 00:22:18 ntpd[1954]: Listen normally on 12 lxc_health [fe80::14f1:21ff:fe8a:12dd%8]:123
Mar 14 00:22:18.911518 ntpd[1954]: 14 Mar 00:22:18 ntpd[1954]: Listen normally on 13 lxc1dc3574439d6 [fe80::5c46:c7ff:fe24:e359%10]:123
Mar 14 00:22:18.911518 ntpd[1954]: 14 Mar 00:22:18 ntpd[1954]: Listen normally on 14 lxcc8e143a612f7 [fe80::94d8:9cff:fe27:a18e%12]:123
Mar 14 00:22:18.909301 ntpd[1954]: Listen normally on 10 cilium_host [fe80::4c29:98ff:fec6:e34c%5]:123
Mar 14 00:22:18.909345 ntpd[1954]: Listen normally on 11 cilium_vxlan [fe80::688e:9fff:fef3:67d7%6]:123
Mar 14 00:22:18.909408 ntpd[1954]: Listen normally on 12 lxc_health [fe80::14f1:21ff:fe8a:12dd%8]:123
Mar 14 00:22:18.909456 ntpd[1954]: Listen normally on 13 lxc1dc3574439d6 [fe80::5c46:c7ff:fe24:e359%10]:123
Mar 14 00:22:18.909494 ntpd[1954]: Listen normally on 14 lxcc8e143a612f7 [fe80::94d8:9cff:fe27:a18e%12]:123
Mar 14 00:22:19.118289 containerd[1984]: time="2026-03-14T00:22:19.117029222Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:22:19.118289 containerd[1984]: time="2026-03-14T00:22:19.117101945Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:22:19.118289 containerd[1984]: time="2026-03-14T00:22:19.117139241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:22:19.118289 containerd[1984]: time="2026-03-14T00:22:19.117242419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:22:19.181644 systemd[1]: Started cri-containerd-eafdf87e3ba4889792637e4a2741cf6b1c94cd4c6df9029766e27e4d6079a808.scope - libcontainer container eafdf87e3ba4889792637e4a2741cf6b1c94cd4c6df9029766e27e4d6079a808.
Mar 14 00:22:19.281246 containerd[1984]: time="2026-03-14T00:22:19.281190409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-kqr4g,Uid:85fe74b4-90c3-4cbf-acb2-df68405eec96,Namespace:kube-system,Attempt:0,} returns sandbox id \"eafdf87e3ba4889792637e4a2741cf6b1c94cd4c6df9029766e27e4d6079a808\""
Mar 14 00:22:19.292466 containerd[1984]: time="2026-03-14T00:22:19.292413405Z" level=info msg="CreateContainer within sandbox \"eafdf87e3ba4889792637e4a2741cf6b1c94cd4c6df9029766e27e4d6079a808\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 14 00:22:19.325795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2575610437.mount: Deactivated successfully.
Mar 14 00:22:19.335301 containerd[1984]: time="2026-03-14T00:22:19.335237260Z" level=info msg="CreateContainer within sandbox \"eafdf87e3ba4889792637e4a2741cf6b1c94cd4c6df9029766e27e4d6079a808\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3dd09f55d7310e6703041db132b3e0cb4ad97bbc56df4157fd0cd7a97a32219f\""
Mar 14 00:22:19.337659 containerd[1984]: time="2026-03-14T00:22:19.337620002Z" level=info msg="StartContainer for \"3dd09f55d7310e6703041db132b3e0cb4ad97bbc56df4157fd0cd7a97a32219f\""
Mar 14 00:22:19.374184 containerd[1984]: time="2026-03-14T00:22:19.372082417Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:22:19.374184 containerd[1984]: time="2026-03-14T00:22:19.372172178Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:22:19.374184 containerd[1984]: time="2026-03-14T00:22:19.372194220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:22:19.374184 containerd[1984]: time="2026-03-14T00:22:19.372310489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:22:19.379660 systemd[1]: Started cri-containerd-3dd09f55d7310e6703041db132b3e0cb4ad97bbc56df4157fd0cd7a97a32219f.scope - libcontainer container 3dd09f55d7310e6703041db132b3e0cb4ad97bbc56df4157fd0cd7a97a32219f.
Mar 14 00:22:19.411611 systemd[1]: Started cri-containerd-4c22d25b27ddfd7ab4a2bb1871c471e5c672634d8cce9423bf2781857e7bdd1e.scope - libcontainer container 4c22d25b27ddfd7ab4a2bb1871c471e5c672634d8cce9423bf2781857e7bdd1e.
Mar 14 00:22:19.442998 containerd[1984]: time="2026-03-14T00:22:19.442956649Z" level=info msg="StartContainer for \"3dd09f55d7310e6703041db132b3e0cb4ad97bbc56df4157fd0cd7a97a32219f\" returns successfully"
Mar 14 00:22:19.471027 containerd[1984]: time="2026-03-14T00:22:19.470992579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-t4gzv,Uid:a5ddfab1-307b-4fb4-951c-9a9ec68527ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c22d25b27ddfd7ab4a2bb1871c471e5c672634d8cce9423bf2781857e7bdd1e\""
Mar 14 00:22:19.479027 containerd[1984]: time="2026-03-14T00:22:19.478852783Z" level=info msg="CreateContainer within sandbox \"4c22d25b27ddfd7ab4a2bb1871c471e5c672634d8cce9423bf2781857e7bdd1e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 14 00:22:19.503730 containerd[1984]: time="2026-03-14T00:22:19.503580726Z" level=info msg="CreateContainer within sandbox \"4c22d25b27ddfd7ab4a2bb1871c471e5c672634d8cce9423bf2781857e7bdd1e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"16fe1112fd73bf6874a7eae4dd3460c8bbe4e7b3cda8a8448874329bb2ee502f\""
Mar 14 00:22:19.507236 containerd[1984]: time="2026-03-14T00:22:19.505121177Z" level=info msg="StartContainer for \"16fe1112fd73bf6874a7eae4dd3460c8bbe4e7b3cda8a8448874329bb2ee502f\""
Mar 14 00:22:19.560587 systemd[1]: Started cri-containerd-16fe1112fd73bf6874a7eae4dd3460c8bbe4e7b3cda8a8448874329bb2ee502f.scope - libcontainer container 16fe1112fd73bf6874a7eae4dd3460c8bbe4e7b3cda8a8448874329bb2ee502f.
Mar 14 00:22:19.599209 containerd[1984]: time="2026-03-14T00:22:19.598936243Z" level=info msg="StartContainer for \"16fe1112fd73bf6874a7eae4dd3460c8bbe4e7b3cda8a8448874329bb2ee502f\" returns successfully"
Mar 14 00:22:19.725152 systemd[1]: Started sshd@7-172.31.23.47:22-68.220.241.50:50860.service - OpenSSH per-connection server daemon (68.220.241.50:50860).
Mar 14 00:22:20.258209 sshd[4824]: Accepted publickey for core from 68.220.241.50 port 50860 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:22:20.261027 sshd[4824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:22:20.267247 systemd-logind[1962]: New session 8 of user core.
Mar 14 00:22:20.274635 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 14 00:22:20.425632 kubelet[3195]: I0314 00:22:20.425531 3195 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-kqr4g" podStartSLOduration=31.425505555 podStartE2EDuration="31.425505555s" podCreationTimestamp="2026-03-14 00:21:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:22:20.424644774 +0000 UTC m=+37.321967011" watchObservedRunningTime="2026-03-14 00:22:20.425505555 +0000 UTC m=+37.322827791"
Mar 14 00:22:20.441466 kubelet[3195]: I0314 00:22:20.441399 3195 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-t4gzv" podStartSLOduration=31.441354557 podStartE2EDuration="31.441354557s" podCreationTimestamp="2026-03-14 00:21:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:22:20.440989784 +0000 UTC m=+37.338312018" watchObservedRunningTime="2026-03-14 00:22:20.441354557 +0000 UTC m=+37.338676793"
Mar 14 00:22:21.224195 sshd[4824]: pam_unix(sshd:session): session closed for user core
Mar 14 00:22:21.228288 systemd[1]: sshd@7-172.31.23.47:22-68.220.241.50:50860.service: Deactivated successfully.
Mar 14 00:22:21.230768 systemd[1]: session-8.scope: Deactivated successfully.
Mar 14 00:22:21.232745 systemd-logind[1962]: Session 8 logged out. Waiting for processes to exit.
Mar 14 00:22:21.234084 systemd-logind[1962]: Removed session 8.
Mar 14 00:22:26.318769 systemd[1]: Started sshd@8-172.31.23.47:22-68.220.241.50:54890.service - OpenSSH per-connection server daemon (68.220.241.50:54890).
Mar 14 00:22:26.804535 sshd[4857]: Accepted publickey for core from 68.220.241.50 port 54890 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:22:26.806020 sshd[4857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:22:26.811648 systemd-logind[1962]: New session 9 of user core.
Mar 14 00:22:26.818614 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 14 00:22:27.247223 sshd[4857]: pam_unix(sshd:session): session closed for user core
Mar 14 00:22:27.252237 systemd[1]: sshd@8-172.31.23.47:22-68.220.241.50:54890.service: Deactivated successfully.
Mar 14 00:22:27.254657 systemd[1]: session-9.scope: Deactivated successfully.
Mar 14 00:22:27.255719 systemd-logind[1962]: Session 9 logged out. Waiting for processes to exit.
Mar 14 00:22:27.256818 systemd-logind[1962]: Removed session 9.
Mar 14 00:22:32.335757 systemd[1]: Started sshd@9-172.31.23.47:22-68.220.241.50:46562.service - OpenSSH per-connection server daemon (68.220.241.50:46562).
Mar 14 00:22:32.826778 sshd[4871]: Accepted publickey for core from 68.220.241.50 port 46562 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:22:32.828327 sshd[4871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:22:32.833331 systemd-logind[1962]: New session 10 of user core.
Mar 14 00:22:32.837597 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 14 00:22:33.235715 sshd[4871]: pam_unix(sshd:session): session closed for user core
Mar 14 00:22:33.239978 systemd[1]: sshd@9-172.31.23.47:22-68.220.241.50:46562.service: Deactivated successfully.
Mar 14 00:22:33.242601 systemd[1]: session-10.scope: Deactivated successfully.
Mar 14 00:22:33.243816 systemd-logind[1962]: Session 10 logged out. Waiting for processes to exit.
Mar 14 00:22:33.246773 systemd-logind[1962]: Removed session 10.
Mar 14 00:22:38.328808 systemd[1]: Started sshd@10-172.31.23.47:22-68.220.241.50:46566.service - OpenSSH per-connection server daemon (68.220.241.50:46566).
Mar 14 00:22:38.805134 sshd[4885]: Accepted publickey for core from 68.220.241.50 port 46566 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:22:38.806662 sshd[4885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:22:38.812302 systemd-logind[1962]: New session 11 of user core.
Mar 14 00:22:38.819656 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 14 00:22:39.216932 sshd[4885]: pam_unix(sshd:session): session closed for user core
Mar 14 00:22:39.221376 systemd[1]: sshd@10-172.31.23.47:22-68.220.241.50:46566.service: Deactivated successfully.
Mar 14 00:22:39.223767 systemd[1]: session-11.scope: Deactivated successfully.
Mar 14 00:22:39.224845 systemd-logind[1962]: Session 11 logged out. Waiting for processes to exit.
Mar 14 00:22:39.225921 systemd-logind[1962]: Removed session 11.
Mar 14 00:22:39.310876 systemd[1]: Started sshd@11-172.31.23.47:22-68.220.241.50:46570.service - OpenSSH per-connection server daemon (68.220.241.50:46570).
Mar 14 00:22:39.805623 sshd[4899]: Accepted publickey for core from 68.220.241.50 port 46570 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:22:39.807226 sshd[4899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:22:39.812533 systemd-logind[1962]: New session 12 of user core.
Mar 14 00:22:39.820601 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 14 00:22:40.305384 sshd[4899]: pam_unix(sshd:session): session closed for user core
Mar 14 00:22:40.310435 systemd[1]: sshd@11-172.31.23.47:22-68.220.241.50:46570.service: Deactivated successfully.
Mar 14 00:22:40.312919 systemd[1]: session-12.scope: Deactivated successfully.
Mar 14 00:22:40.314701 systemd-logind[1962]: Session 12 logged out. Waiting for processes to exit.
Mar 14 00:22:40.316191 systemd-logind[1962]: Removed session 12.
Mar 14 00:22:40.393757 systemd[1]: Started sshd@12-172.31.23.47:22-68.220.241.50:46572.service - OpenSSH per-connection server daemon (68.220.241.50:46572).
Mar 14 00:22:40.888443 sshd[4911]: Accepted publickey for core from 68.220.241.50 port 46572 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:22:40.889994 sshd[4911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:22:40.895855 systemd-logind[1962]: New session 13 of user core.
Mar 14 00:22:40.902615 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 14 00:22:41.305301 sshd[4911]: pam_unix(sshd:session): session closed for user core
Mar 14 00:22:41.310473 systemd-logind[1962]: Session 13 logged out. Waiting for processes to exit.
Mar 14 00:22:41.311472 systemd[1]: sshd@12-172.31.23.47:22-68.220.241.50:46572.service: Deactivated successfully.
Mar 14 00:22:41.313632 systemd[1]: session-13.scope: Deactivated successfully.
Mar 14 00:22:41.314865 systemd-logind[1962]: Removed session 13.
Mar 14 00:22:46.399750 systemd[1]: Started sshd@13-172.31.23.47:22-68.220.241.50:52454.service - OpenSSH per-connection server daemon (68.220.241.50:52454).
Mar 14 00:22:46.887313 sshd[4928]: Accepted publickey for core from 68.220.241.50 port 52454 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:22:46.888908 sshd[4928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:22:46.893312 systemd-logind[1962]: New session 14 of user core.
Mar 14 00:22:46.902609 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 14 00:22:47.298187 sshd[4928]: pam_unix(sshd:session): session closed for user core
Mar 14 00:22:47.301521 systemd[1]: sshd@13-172.31.23.47:22-68.220.241.50:52454.service: Deactivated successfully.
Mar 14 00:22:47.304339 systemd[1]: session-14.scope: Deactivated successfully.
Mar 14 00:22:47.306096 systemd-logind[1962]: Session 14 logged out. Waiting for processes to exit.
Mar 14 00:22:47.307663 systemd-logind[1962]: Removed session 14.
Mar 14 00:22:52.390748 systemd[1]: Started sshd@14-172.31.23.47:22-68.220.241.50:56860.service - OpenSSH per-connection server daemon (68.220.241.50:56860).
Mar 14 00:22:52.869615 sshd[4944]: Accepted publickey for core from 68.220.241.50 port 56860 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:22:52.871389 sshd[4944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:22:52.876450 systemd-logind[1962]: New session 15 of user core.
Mar 14 00:22:52.880596 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 14 00:22:53.278490 sshd[4944]: pam_unix(sshd:session): session closed for user core
Mar 14 00:22:53.283151 systemd-logind[1962]: Session 15 logged out. Waiting for processes to exit.
Mar 14 00:22:53.283984 systemd[1]: sshd@14-172.31.23.47:22-68.220.241.50:56860.service: Deactivated successfully.
Mar 14 00:22:53.286305 systemd[1]: session-15.scope: Deactivated successfully.
Mar 14 00:22:53.287840 systemd-logind[1962]: Removed session 15.
Mar 14 00:22:53.366745 systemd[1]: Started sshd@15-172.31.23.47:22-68.220.241.50:56872.service - OpenSSH per-connection server daemon (68.220.241.50:56872).
Mar 14 00:22:53.856273 sshd[4957]: Accepted publickey for core from 68.220.241.50 port 56872 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:22:53.856946 sshd[4957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:22:53.861498 systemd-logind[1962]: New session 16 of user core.
Mar 14 00:22:53.867603 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 14 00:22:54.676355 sshd[4957]: pam_unix(sshd:session): session closed for user core
Mar 14 00:22:54.683843 systemd[1]: sshd@15-172.31.23.47:22-68.220.241.50:56872.service: Deactivated successfully.
Mar 14 00:22:54.685865 systemd[1]: session-16.scope: Deactivated successfully.
Mar 14 00:22:54.686615 systemd-logind[1962]: Session 16 logged out. Waiting for processes to exit.
Mar 14 00:22:54.688125 systemd-logind[1962]: Removed session 16.
Mar 14 00:22:54.765795 systemd[1]: Started sshd@16-172.31.23.47:22-68.220.241.50:56880.service - OpenSSH per-connection server daemon (68.220.241.50:56880).
Mar 14 00:22:55.254224 sshd[4968]: Accepted publickey for core from 68.220.241.50 port 56880 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:22:55.255845 sshd[4968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:22:55.262088 systemd-logind[1962]: New session 17 of user core.
Mar 14 00:22:55.267654 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 14 00:22:56.261753 sshd[4968]: pam_unix(sshd:session): session closed for user core
Mar 14 00:22:56.266012 systemd-logind[1962]: Session 17 logged out. Waiting for processes to exit.
Mar 14 00:22:56.266628 systemd[1]: sshd@16-172.31.23.47:22-68.220.241.50:56880.service: Deactivated successfully.
Mar 14 00:22:56.269059 systemd[1]: session-17.scope: Deactivated successfully.
Mar 14 00:22:56.270428 systemd-logind[1962]: Removed session 17.
Mar 14 00:22:56.354757 systemd[1]: Started sshd@17-172.31.23.47:22-68.220.241.50:56888.service - OpenSSH per-connection server daemon (68.220.241.50:56888).
Mar 14 00:22:56.853820 sshd[4984]: Accepted publickey for core from 68.220.241.50 port 56888 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:22:56.855373 sshd[4984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:22:56.860579 systemd-logind[1962]: New session 18 of user core.
Mar 14 00:22:56.865610 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 14 00:22:57.417600 sshd[4984]: pam_unix(sshd:session): session closed for user core
Mar 14 00:22:57.422320 systemd[1]: sshd@17-172.31.23.47:22-68.220.241.50:56888.service: Deactivated successfully.
Mar 14 00:22:57.424566 systemd[1]: session-18.scope: Deactivated successfully.
Mar 14 00:22:57.425388 systemd-logind[1962]: Session 18 logged out. Waiting for processes to exit.
Mar 14 00:22:57.426662 systemd-logind[1962]: Removed session 18.
Mar 14 00:22:57.511773 systemd[1]: Started sshd@18-172.31.23.47:22-68.220.241.50:56896.service - OpenSSH per-connection server daemon (68.220.241.50:56896).
Mar 14 00:22:57.998106 sshd[4997]: Accepted publickey for core from 68.220.241.50 port 56896 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:22:57.998811 sshd[4997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:22:58.003654 systemd-logind[1962]: New session 19 of user core.
Mar 14 00:22:58.007596 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 14 00:22:58.415557 sshd[4997]: pam_unix(sshd:session): session closed for user core
Mar 14 00:22:58.420136 systemd[1]: sshd@18-172.31.23.47:22-68.220.241.50:56896.service: Deactivated successfully.
Mar 14 00:22:58.422332 systemd[1]: session-19.scope: Deactivated successfully.
Mar 14 00:22:58.423515 systemd-logind[1962]: Session 19 logged out. Waiting for processes to exit.
Mar 14 00:22:58.424749 systemd-logind[1962]: Removed session 19.
Mar 14 00:23:03.508805 systemd[1]: Started sshd@19-172.31.23.47:22-68.220.241.50:60328.service - OpenSSH per-connection server daemon (68.220.241.50:60328).
Mar 14 00:23:03.998326 sshd[5010]: Accepted publickey for core from 68.220.241.50 port 60328 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:23:04.000023 sshd[5010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:23:04.005039 systemd-logind[1962]: New session 20 of user core.
Mar 14 00:23:04.011621 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 14 00:23:04.408074 sshd[5010]: pam_unix(sshd:session): session closed for user core
Mar 14 00:23:04.413289 systemd[1]: sshd@19-172.31.23.47:22-68.220.241.50:60328.service: Deactivated successfully.
Mar 14 00:23:04.415919 systemd[1]: session-20.scope: Deactivated successfully.
Mar 14 00:23:04.416923 systemd-logind[1962]: Session 20 logged out. Waiting for processes to exit.
Mar 14 00:23:04.418557 systemd-logind[1962]: Removed session 20.
Mar 14 00:23:09.499789 systemd[1]: Started sshd@20-172.31.23.47:22-68.220.241.50:60344.service - OpenSSH per-connection server daemon (68.220.241.50:60344).
Mar 14 00:23:09.976238 sshd[5023]: Accepted publickey for core from 68.220.241.50 port 60344 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:23:09.977837 sshd[5023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:23:09.983533 systemd-logind[1962]: New session 21 of user core.
Mar 14 00:23:09.990621 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 14 00:23:10.381627 sshd[5023]: pam_unix(sshd:session): session closed for user core
Mar 14 00:23:10.386221 systemd[1]: sshd@20-172.31.23.47:22-68.220.241.50:60344.service: Deactivated successfully.
Mar 14 00:23:10.388638 systemd[1]: session-21.scope: Deactivated successfully.
Mar 14 00:23:10.389413 systemd-logind[1962]: Session 21 logged out. Waiting for processes to exit.
Mar 14 00:23:10.390476 systemd-logind[1962]: Removed session 21.
Mar 14 00:23:15.479133 systemd[1]: Started sshd@21-172.31.23.47:22-68.220.241.50:47732.service - OpenSSH per-connection server daemon (68.220.241.50:47732).
Mar 14 00:23:15.963954 sshd[5039]: Accepted publickey for core from 68.220.241.50 port 47732 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:23:15.965719 sshd[5039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:23:15.973466 systemd-logind[1962]: New session 22 of user core.
Mar 14 00:23:15.977764 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 14 00:23:16.369740 sshd[5039]: pam_unix(sshd:session): session closed for user core
Mar 14 00:23:16.373846 systemd-logind[1962]: Session 22 logged out. Waiting for processes to exit.
Mar 14 00:23:16.374762 systemd[1]: sshd@21-172.31.23.47:22-68.220.241.50:47732.service: Deactivated successfully.
Mar 14 00:23:16.377221 systemd[1]: session-22.scope: Deactivated successfully.
Mar 14 00:23:16.378259 systemd-logind[1962]: Removed session 22.
Mar 14 00:23:21.463779 systemd[1]: Started sshd@22-172.31.23.47:22-68.220.241.50:47744.service - OpenSSH per-connection server daemon (68.220.241.50:47744).
Mar 14 00:23:21.947437 sshd[5054]: Accepted publickey for core from 68.220.241.50 port 47744 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:23:21.948795 sshd[5054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:23:21.955988 systemd-logind[1962]: New session 23 of user core.
Mar 14 00:23:21.964681 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 14 00:23:22.372813 sshd[5054]: pam_unix(sshd:session): session closed for user core
Mar 14 00:23:22.376855 systemd[1]: sshd@22-172.31.23.47:22-68.220.241.50:47744.service: Deactivated successfully.
Mar 14 00:23:22.379483 systemd[1]: session-23.scope: Deactivated successfully.
Mar 14 00:23:22.380436 systemd-logind[1962]: Session 23 logged out. Waiting for processes to exit.
Mar 14 00:23:22.381623 systemd-logind[1962]: Removed session 23.
Mar 14 00:23:22.462728 systemd[1]: Started sshd@23-172.31.23.47:22-68.220.241.50:32818.service - OpenSSH per-connection server daemon (68.220.241.50:32818).
Mar 14 00:23:22.946529 sshd[5067]: Accepted publickey for core from 68.220.241.50 port 32818 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:23:22.948138 sshd[5067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:23:22.953977 systemd-logind[1962]: New session 24 of user core.
Mar 14 00:23:22.958587 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 14 00:23:24.940797 systemd[1]: run-containerd-runc-k8s.io-a28b93738c18e4ecfc2249305d2fa17ce6d47a279113f7997d46cbbd4876a32e-runc.I3Y4O3.mount: Deactivated successfully.
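The sessions above follow a fixed lifecycle (Accepted publickey → session opened → New session N → session closed → Removed session N). A minimal Python sketch for pairing the systemd-logind open/close events, assuming lines shaped like the entries in this log (the function name `session_events` and the regexes are illustrative, not part of any tool shown here):

```python
import re

# Matches journald console lines such as:
#   "Mar 14 00:22:53.861498 systemd-logind[1962]: New session 16 of user core."
ENTRY = re.compile(
    r"^(?P<ts>\w{3} +\d+ [\d:.]+) (?P<unit>[\w@.-]+)\[?\d*\]?: (?P<msg>.*)$"
)

def session_events(lines):
    """Yield (timestamp, 'open'|'close', session_id) for logind entries."""
    for line in lines:
        m = ENTRY.match(line)
        if not m:
            continue
        msg = m.group("msg")
        opened = re.match(r"New session (\d+) of user", msg)
        closed = re.match(r"Removed session (\d+)\.", msg)
        if opened:
            yield m.group("ts"), "open", int(opened.group(1))
        elif closed:
            yield m.group("ts"), "close", int(closed.group(1))
```

Feeding the log through this and pairing each `open` with the matching `close` gives the per-session lifetime; sessions 16 through 24 here each last roughly one second, consistent with scripted SSH probes.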
Mar 14 00:23:24.977703 containerd[1984]: time="2026-03-14T00:23:24.977635079Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 14 00:23:24.994144 containerd[1984]: time="2026-03-14T00:23:24.994076324Z" level=info msg="StopContainer for \"a28b93738c18e4ecfc2249305d2fa17ce6d47a279113f7997d46cbbd4876a32e\" with timeout 2 (s)"
Mar 14 00:23:24.994356 containerd[1984]: time="2026-03-14T00:23:24.994094768Z" level=info msg="StopContainer for \"5a2ce1f32479cfe53c6da5953dde24f4fa149439054fd35ad1e24f516dc84069\" with timeout 30 (s)"
Mar 14 00:23:24.994496 containerd[1984]: time="2026-03-14T00:23:24.994467343Z" level=info msg="Stop container \"a28b93738c18e4ecfc2249305d2fa17ce6d47a279113f7997d46cbbd4876a32e\" with signal terminated"
Mar 14 00:23:24.995100 containerd[1984]: time="2026-03-14T00:23:24.995050527Z" level=info msg="Stop container \"5a2ce1f32479cfe53c6da5953dde24f4fa149439054fd35ad1e24f516dc84069\" with signal terminated"
Mar 14 00:23:25.009589 systemd-networkd[1883]: lxc_health: Link DOWN
Mar 14 00:23:25.009601 systemd-networkd[1883]: lxc_health: Lost carrier
Mar 14 00:23:25.020710 systemd[1]: cri-containerd-5a2ce1f32479cfe53c6da5953dde24f4fa149439054fd35ad1e24f516dc84069.scope: Deactivated successfully.
Mar 14 00:23:25.041656 systemd[1]: cri-containerd-a28b93738c18e4ecfc2249305d2fa17ce6d47a279113f7997d46cbbd4876a32e.scope: Deactivated successfully.
Mar 14 00:23:25.043093 systemd[1]: cri-containerd-a28b93738c18e4ecfc2249305d2fa17ce6d47a279113f7997d46cbbd4876a32e.scope: Consumed 8.482s CPU time.
Mar 14 00:23:25.065890 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a2ce1f32479cfe53c6da5953dde24f4fa149439054fd35ad1e24f516dc84069-rootfs.mount: Deactivated successfully.
Mar 14 00:23:25.079307 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a28b93738c18e4ecfc2249305d2fa17ce6d47a279113f7997d46cbbd4876a32e-rootfs.mount: Deactivated successfully.
Mar 14 00:23:25.096620 containerd[1984]: time="2026-03-14T00:23:25.096425233Z" level=info msg="shim disconnected" id=5a2ce1f32479cfe53c6da5953dde24f4fa149439054fd35ad1e24f516dc84069 namespace=k8s.io
Mar 14 00:23:25.096620 containerd[1984]: time="2026-03-14T00:23:25.096489580Z" level=warning msg="cleaning up after shim disconnected" id=5a2ce1f32479cfe53c6da5953dde24f4fa149439054fd35ad1e24f516dc84069 namespace=k8s.io
Mar 14 00:23:25.096620 containerd[1984]: time="2026-03-14T00:23:25.096505248Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:23:25.096964 containerd[1984]: time="2026-03-14T00:23:25.096508889Z" level=info msg="shim disconnected" id=a28b93738c18e4ecfc2249305d2fa17ce6d47a279113f7997d46cbbd4876a32e namespace=k8s.io
Mar 14 00:23:25.096964 containerd[1984]: time="2026-03-14T00:23:25.096807764Z" level=warning msg="cleaning up after shim disconnected" id=a28b93738c18e4ecfc2249305d2fa17ce6d47a279113f7997d46cbbd4876a32e namespace=k8s.io
Mar 14 00:23:25.096964 containerd[1984]: time="2026-03-14T00:23:25.096818564Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:23:25.118476 containerd[1984]: time="2026-03-14T00:23:25.118338861Z" level=warning msg="cleanup warnings time=\"2026-03-14T00:23:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 14 00:23:25.125745 containerd[1984]: time="2026-03-14T00:23:25.125700017Z" level=info msg="StopContainer for \"5a2ce1f32479cfe53c6da5953dde24f4fa149439054fd35ad1e24f516dc84069\" returns successfully"
Mar 14 00:23:25.128558 containerd[1984]: time="2026-03-14T00:23:25.126403395Z" level=info msg="StopPodSandbox for \"7ee9434b6c1dc7978924145a84593c24b15d992bf2c0a76d829f055daddb8fb1\""
Mar 14 00:23:25.128558 containerd[1984]: time="2026-03-14T00:23:25.126447229Z" level=info msg="Container to stop \"5a2ce1f32479cfe53c6da5953dde24f4fa149439054fd35ad1e24f516dc84069\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:23:25.130106 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7ee9434b6c1dc7978924145a84593c24b15d992bf2c0a76d829f055daddb8fb1-shm.mount: Deactivated successfully.
Mar 14 00:23:25.132511 containerd[1984]: time="2026-03-14T00:23:25.132028798Z" level=info msg="StopContainer for \"a28b93738c18e4ecfc2249305d2fa17ce6d47a279113f7997d46cbbd4876a32e\" returns successfully"
Mar 14 00:23:25.132831 containerd[1984]: time="2026-03-14T00:23:25.132806175Z" level=info msg="StopPodSandbox for \"d2ef007924fc1bdcc63f8dcb983924c1efa5e9bc502cf5d7e18c5bbe22686d82\""
Mar 14 00:23:25.132950 containerd[1984]: time="2026-03-14T00:23:25.132932220Z" level=info msg="Container to stop \"428028c9e022940f93aac3d752be938d72b4bf2614f038d8a99ac9661493ffa6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:23:25.133034 containerd[1984]: time="2026-03-14T00:23:25.133017001Z" level=info msg="Container to stop \"6e76500461a1abbcb0f521502edf8a0a53a59bcf5e72b3cc6f2d7ae3f1e68a5a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:23:25.133110 containerd[1984]: time="2026-03-14T00:23:25.133094192Z" level=info msg="Container to stop \"40677cd6d602e53b089a2266adb6c2f52c4b958dc2ebb5c82276486d14e17dc0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:23:25.133287 containerd[1984]: time="2026-03-14T00:23:25.133169588Z" level=info msg="Container to stop \"8bc8b190e730acb498b0348386e92e0d667a0472a5a4e49d2ccbd8be3c6bc394\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:23:25.133287 containerd[1984]: time="2026-03-14T00:23:25.133186254Z" level=info msg="Container to stop \"a28b93738c18e4ecfc2249305d2fa17ce6d47a279113f7997d46cbbd4876a32e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:23:25.140829 systemd[1]: cri-containerd-7ee9434b6c1dc7978924145a84593c24b15d992bf2c0a76d829f055daddb8fb1.scope: Deactivated successfully.
Mar 14 00:23:25.144869 systemd[1]: cri-containerd-d2ef007924fc1bdcc63f8dcb983924c1efa5e9bc502cf5d7e18c5bbe22686d82.scope: Deactivated successfully.
Mar 14 00:23:25.188060 containerd[1984]: time="2026-03-14T00:23:25.187992912Z" level=info msg="shim disconnected" id=d2ef007924fc1bdcc63f8dcb983924c1efa5e9bc502cf5d7e18c5bbe22686d82 namespace=k8s.io
Mar 14 00:23:25.188510 containerd[1984]: time="2026-03-14T00:23:25.188092025Z" level=warning msg="cleaning up after shim disconnected" id=d2ef007924fc1bdcc63f8dcb983924c1efa5e9bc502cf5d7e18c5bbe22686d82 namespace=k8s.io
Mar 14 00:23:25.188510 containerd[1984]: time="2026-03-14T00:23:25.188106339Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:23:25.189807 containerd[1984]: time="2026-03-14T00:23:25.188867200Z" level=info msg="shim disconnected" id=7ee9434b6c1dc7978924145a84593c24b15d992bf2c0a76d829f055daddb8fb1 namespace=k8s.io
Mar 14 00:23:25.189807 containerd[1984]: time="2026-03-14T00:23:25.188941003Z" level=warning msg="cleaning up after shim disconnected" id=7ee9434b6c1dc7978924145a84593c24b15d992bf2c0a76d829f055daddb8fb1 namespace=k8s.io
Mar 14 00:23:25.189807 containerd[1984]: time="2026-03-14T00:23:25.188957632Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:23:25.214175 containerd[1984]: time="2026-03-14T00:23:25.213793796Z" level=warning msg="cleanup warnings time=\"2026-03-14T00:23:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 14 00:23:25.221433 containerd[1984]: time="2026-03-14T00:23:25.220388910Z" level=info msg="TearDown network for sandbox \"7ee9434b6c1dc7978924145a84593c24b15d992bf2c0a76d829f055daddb8fb1\" successfully"
Mar 14 00:23:25.221433 containerd[1984]: time="2026-03-14T00:23:25.220496978Z" level=info msg="StopPodSandbox for \"7ee9434b6c1dc7978924145a84593c24b15d992bf2c0a76d829f055daddb8fb1\" returns successfully"
Mar 14 00:23:25.221798 containerd[1984]: time="2026-03-14T00:23:25.221694493Z" level=info msg="TearDown network for sandbox \"d2ef007924fc1bdcc63f8dcb983924c1efa5e9bc502cf5d7e18c5bbe22686d82\" successfully"
Mar 14 00:23:25.221798 containerd[1984]: time="2026-03-14T00:23:25.221722725Z" level=info msg="StopPodSandbox for \"d2ef007924fc1bdcc63f8dcb983924c1efa5e9bc502cf5d7e18c5bbe22686d82\" returns successfully"
Mar 14 00:23:25.321454 kubelet[3195]: I0314 00:23:25.320645 3195 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/2c05e857-e645-4351-9a22-c2489fb18543-clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2c05e857-e645-4351-9a22-c2489fb18543-clustermesh-secrets\") pod \"2c05e857-e645-4351-9a22-c2489fb18543\" (UID: \"2c05e857-e645-4351-9a22-c2489fb18543\") "
Mar 14 00:23:25.321454 kubelet[3195]: I0314 00:23:25.320707 3195 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/2c05e857-e645-4351-9a22-c2489fb18543-kube-api-access-445pn\" (UniqueName: \"kubernetes.io/projected/2c05e857-e645-4351-9a22-c2489fb18543-kube-api-access-445pn\") pod \"2c05e857-e645-4351-9a22-c2489fb18543\" (UID: \"2c05e857-e645-4351-9a22-c2489fb18543\") "
Mar 14 00:23:25.321454 kubelet[3195]: I0314 00:23:25.320739 3195 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-host-proc-sys-net\") pod \"2c05e857-e645-4351-9a22-c2489fb18543\" (UID: \"2c05e857-e645-4351-9a22-c2489fb18543\") "
Mar 14 00:23:25.321454 kubelet[3195]: I0314 00:23:25.320763 3195 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-bpf-maps\") pod \"2c05e857-e645-4351-9a22-c2489fb18543\" (UID: \"2c05e857-e645-4351-9a22-c2489fb18543\") "
Mar 14 00:23:25.321454 kubelet[3195]: I0314 00:23:25.320799 3195 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/0906451b-22e7-4a77-ad89-2d3271295240-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0906451b-22e7-4a77-ad89-2d3271295240-cilium-config-path\") pod \"0906451b-22e7-4a77-ad89-2d3271295240\" (UID: \"0906451b-22e7-4a77-ad89-2d3271295240\") "
Mar 14 00:23:25.322115 kubelet[3195]: I0314 00:23:25.320825 3195 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-cni-path\" (UniqueName: \"kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-cni-path\") pod \"2c05e857-e645-4351-9a22-c2489fb18543\" (UID: \"2c05e857-e645-4351-9a22-c2489fb18543\") "
Mar 14 00:23:25.322115 kubelet[3195]: I0314 00:23:25.320851 3195 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/2c05e857-e645-4351-9a22-c2489fb18543-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2c05e857-e645-4351-9a22-c2489fb18543-cilium-config-path\") pod \"2c05e857-e645-4351-9a22-c2489fb18543\" (UID: \"2c05e857-e645-4351-9a22-c2489fb18543\") "
Mar 14 00:23:25.322115 kubelet[3195]: I0314 00:23:25.320883 3195 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/0906451b-22e7-4a77-ad89-2d3271295240-kube-api-access-2x9bk\" (UniqueName: \"kubernetes.io/projected/0906451b-22e7-4a77-ad89-2d3271295240-kube-api-access-2x9bk\") pod \"0906451b-22e7-4a77-ad89-2d3271295240\" (UID: \"0906451b-22e7-4a77-ad89-2d3271295240\") "
Mar 14 00:23:25.322115 kubelet[3195]: I0314 00:23:25.320907 3195 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-host-proc-sys-kernel\") pod \"2c05e857-e645-4351-9a22-c2489fb18543\" (UID: \"2c05e857-e645-4351-9a22-c2489fb18543\") "
Mar 14 00:23:25.322115 kubelet[3195]: I0314 00:23:25.320935 3195 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-etc-cni-netd\") pod \"2c05e857-e645-4351-9a22-c2489fb18543\" (UID: \"2c05e857-e645-4351-9a22-c2489fb18543\") "
Mar 14 00:23:25.322829 kubelet[3195]: I0314 00:23:25.320959 3195 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-cilium-run\" (UniqueName: \"kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-cilium-run\") pod \"2c05e857-e645-4351-9a22-c2489fb18543\" (UID: \"2c05e857-e645-4351-9a22-c2489fb18543\") "
Mar 14 00:23:25.322829 kubelet[3195]: I0314 00:23:25.320982 3195 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-lib-modules\") pod \"2c05e857-e645-4351-9a22-c2489fb18543\" (UID: \"2c05e857-e645-4351-9a22-c2489fb18543\") "
Mar 14 00:23:25.322829 kubelet[3195]: I0314 00:23:25.321006 3195 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-hostproc\" (UniqueName: \"kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-hostproc\") pod \"2c05e857-e645-4351-9a22-c2489fb18543\" (UID: \"2c05e857-e645-4351-9a22-c2489fb18543\") "
Mar 14 00:23:25.322829 kubelet[3195]: I0314 00:23:25.321031 3195 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-xtables-lock\") pod \"2c05e857-e645-4351-9a22-c2489fb18543\" (UID: \"2c05e857-e645-4351-9a22-c2489fb18543\") "
Mar 14 00:23:25.322829 kubelet[3195]: I0314 00:23:25.321056 3195 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-cilium-cgroup\") pod \"2c05e857-e645-4351-9a22-c2489fb18543\" (UID: \"2c05e857-e645-4351-9a22-c2489fb18543\") "
Mar 14 00:23:25.323074 kubelet[3195]: I0314 00:23:25.321083 3195 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/2c05e857-e645-4351-9a22-c2489fb18543-hubble-tls\" (UniqueName: \"kubernetes.io/projected/2c05e857-e645-4351-9a22-c2489fb18543-hubble-tls\") pod \"2c05e857-e645-4351-9a22-c2489fb18543\" (UID: \"2c05e857-e645-4351-9a22-c2489fb18543\") "
Mar 14 00:23:25.324351 kubelet[3195]: I0314 00:23:25.323912 3195 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-host-proc-sys-kernel" pod "2c05e857-e645-4351-9a22-c2489fb18543" (UID: "2c05e857-e645-4351-9a22-c2489fb18543"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:23:25.324351 kubelet[3195]: I0314 00:23:25.324010 3195 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-etc-cni-netd" pod "2c05e857-e645-4351-9a22-c2489fb18543" (UID: "2c05e857-e645-4351-9a22-c2489fb18543"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:23:25.324351 kubelet[3195]: I0314 00:23:25.324144 3195 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-cilium-run" pod "2c05e857-e645-4351-9a22-c2489fb18543" (UID: "2c05e857-e645-4351-9a22-c2489fb18543"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:23:25.324351 kubelet[3195]: I0314 00:23:25.324167 3195 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-lib-modules" pod "2c05e857-e645-4351-9a22-c2489fb18543" (UID: "2c05e857-e645-4351-9a22-c2489fb18543"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:23:25.324351 kubelet[3195]: I0314 00:23:25.324292 3195 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-hostproc" pod "2c05e857-e645-4351-9a22-c2489fb18543" (UID: "2c05e857-e645-4351-9a22-c2489fb18543"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:23:25.325701 kubelet[3195]: I0314 00:23:25.324315 3195 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-xtables-lock" pod "2c05e857-e645-4351-9a22-c2489fb18543" (UID: "2c05e857-e645-4351-9a22-c2489fb18543"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:23:25.325792 kubelet[3195]: I0314 00:23:25.324336 3195 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-cilium-cgroup" pod "2c05e857-e645-4351-9a22-c2489fb18543" (UID: "2c05e857-e645-4351-9a22-c2489fb18543"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:23:25.326974 kubelet[3195]: I0314 00:23:25.326931 3195 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-host-proc-sys-net" pod "2c05e857-e645-4351-9a22-c2489fb18543" (UID: "2c05e857-e645-4351-9a22-c2489fb18543"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:23:25.327132 kubelet[3195]: I0314 00:23:25.327114 3195 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-bpf-maps" pod "2c05e857-e645-4351-9a22-c2489fb18543" (UID: "2c05e857-e645-4351-9a22-c2489fb18543"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:23:25.328201 kubelet[3195]: I0314 00:23:25.328172 3195 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c05e857-e645-4351-9a22-c2489fb18543-hubble-tls" pod "2c05e857-e645-4351-9a22-c2489fb18543" (UID: "2c05e857-e645-4351-9a22-c2489fb18543"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 14 00:23:25.328285 kubelet[3195]: I0314 00:23:25.328217 3195 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-cni-path" pod "2c05e857-e645-4351-9a22-c2489fb18543" (UID: "2c05e857-e645-4351-9a22-c2489fb18543"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:23:25.329968 kubelet[3195]: I0314 00:23:25.329939 3195 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c05e857-e645-4351-9a22-c2489fb18543-kube-api-access-445pn" pod "2c05e857-e645-4351-9a22-c2489fb18543" (UID: "2c05e857-e645-4351-9a22-c2489fb18543"). InnerVolumeSpecName "kube-api-access-445pn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 14 00:23:25.331947 kubelet[3195]: I0314 00:23:25.331922 3195 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c05e857-e645-4351-9a22-c2489fb18543-clustermesh-secrets" pod "2c05e857-e645-4351-9a22-c2489fb18543" (UID: "2c05e857-e645-4351-9a22-c2489fb18543"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 14 00:23:25.332103 kubelet[3195]: I0314 00:23:25.331928 3195 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c05e857-e645-4351-9a22-c2489fb18543-cilium-config-path" pod "2c05e857-e645-4351-9a22-c2489fb18543" (UID: "2c05e857-e645-4351-9a22-c2489fb18543"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 14 00:23:25.333507 kubelet[3195]: I0314 00:23:25.333458 3195 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0906451b-22e7-4a77-ad89-2d3271295240-kube-api-access-2x9bk" pod "0906451b-22e7-4a77-ad89-2d3271295240" (UID: "0906451b-22e7-4a77-ad89-2d3271295240"). InnerVolumeSpecName "kube-api-access-2x9bk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 14 00:23:25.333677 kubelet[3195]: I0314 00:23:25.333648 3195 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0906451b-22e7-4a77-ad89-2d3271295240-cilium-config-path" pod "0906451b-22e7-4a77-ad89-2d3271295240" (UID: "0906451b-22e7-4a77-ad89-2d3271295240"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 14 00:23:25.422187 kubelet[3195]: I0314 00:23:25.422138 3195 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-hostproc\") on node \"ip-172-31-23-47\" DevicePath \"\""
Mar 14 00:23:25.422187 kubelet[3195]: I0314 00:23:25.422173 3195 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-xtables-lock\") on node \"ip-172-31-23-47\" DevicePath \"\""
Mar 14 00:23:25.422187 kubelet[3195]: I0314 00:23:25.422187 3195 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-cilium-cgroup\") on node \"ip-172-31-23-47\" DevicePath \"\""
Mar 14 00:23:25.422187 kubelet[3195]: I0314 00:23:25.422198 3195 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2c05e857-e645-4351-9a22-c2489fb18543-hubble-tls\") on node \"ip-172-31-23-47\" DevicePath \"\""
Mar 14 00:23:25.422485 kubelet[3195]: I0314 00:23:25.422212 3195 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2c05e857-e645-4351-9a22-c2489fb18543-clustermesh-secrets\") on node \"ip-172-31-23-47\" DevicePath \"\""
Mar 14 00:23:25.422485 kubelet[3195]: I0314 00:23:25.422224 3195 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-445pn\" (UniqueName: \"kubernetes.io/projected/2c05e857-e645-4351-9a22-c2489fb18543-kube-api-access-445pn\") on node \"ip-172-31-23-47\" DevicePath \"\""
Mar 14 00:23:25.422485 kubelet[3195]: I0314 00:23:25.422235 3195 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-host-proc-sys-net\") on node \"ip-172-31-23-47\" DevicePath \"\""
Mar 14 00:23:25.422485 kubelet[3195]: I0314 00:23:25.422245 3195 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-bpf-maps\") on node \"ip-172-31-23-47\" DevicePath \"\""
Mar 14 00:23:25.422485 kubelet[3195]: I0314 00:23:25.422256 3195 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0906451b-22e7-4a77-ad89-2d3271295240-cilium-config-path\") on node \"ip-172-31-23-47\" DevicePath \"\""
Mar 14 00:23:25.422485 kubelet[3195]: I0314 00:23:25.422266 3195 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-cni-path\") on node \"ip-172-31-23-47\" DevicePath \"\""
Mar 14 00:23:25.422485 kubelet[3195]: I0314 00:23:25.422275 3195 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2c05e857-e645-4351-9a22-c2489fb18543-cilium-config-path\") on node \"ip-172-31-23-47\" DevicePath \"\""
Mar 14 00:23:25.422485 kubelet[3195]: I0314 00:23:25.422285 3195 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2x9bk\" (UniqueName: \"kubernetes.io/projected/0906451b-22e7-4a77-ad89-2d3271295240-kube-api-access-2x9bk\") on node \"ip-172-31-23-47\" DevicePath \"\""
Mar 14 00:23:25.422681 kubelet[3195]: I0314 00:23:25.422296 3195 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-host-proc-sys-kernel\") on node \"ip-172-31-23-47\" DevicePath \"\""
Mar 14 00:23:25.422681 kubelet[3195]: I0314 00:23:25.422307 3195 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-etc-cni-netd\") on node \"ip-172-31-23-47\" DevicePath \"\""
Mar 14 00:23:25.422681 kubelet[3195]: I0314 00:23:25.422321 3195 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-cilium-run\") on node \"ip-172-31-23-47\" DevicePath \"\""
Mar 14 00:23:25.422681 kubelet[3195]: I0314 00:23:25.422332 3195 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c05e857-e645-4351-9a22-c2489fb18543-lib-modules\") on node \"ip-172-31-23-47\" DevicePath \"\""
Mar 14 00:23:25.558909 kubelet[3195]: I0314 00:23:25.558873 3195 scope.go:122] "RemoveContainer" containerID="5a2ce1f32479cfe53c6da5953dde24f4fa149439054fd35ad1e24f516dc84069"
Mar 14 00:23:25.562692 containerd[1984]: time="2026-03-14T00:23:25.562646741Z" level=info msg="RemoveContainer for \"5a2ce1f32479cfe53c6da5953dde24f4fa149439054fd35ad1e24f516dc84069\""
Mar 14 00:23:25.573437 containerd[1984]: time="2026-03-14T00:23:25.573143284Z" level=info msg="RemoveContainer for \"5a2ce1f32479cfe53c6da5953dde24f4fa149439054fd35ad1e24f516dc84069\" returns successfully"
Mar 14 00:23:25.573620 kubelet[3195]: I0314 00:23:25.573488 3195 scope.go:122] "RemoveContainer" containerID="5a2ce1f32479cfe53c6da5953dde24f4fa149439054fd35ad1e24f516dc84069"
Mar 14 00:23:25.578781 systemd[1]: Removed slice kubepods-besteffort-pod0906451b_22e7_4a77_ad89_2d3271295240.slice - libcontainer container kubepods-besteffort-pod0906451b_22e7_4a77_ad89_2d3271295240.slice.
Mar 14 00:23:25.590818 systemd[1]: Removed slice kubepods-burstable-pod2c05e857_e645_4351_9a22_c2489fb18543.slice - libcontainer container kubepods-burstable-pod2c05e857_e645_4351_9a22_c2489fb18543.slice.
Mar 14 00:23:25.591157 systemd[1]: kubepods-burstable-pod2c05e857_e645_4351_9a22_c2489fb18543.slice: Consumed 8.571s CPU time.
Mar 14 00:23:25.598318 containerd[1984]: time="2026-03-14T00:23:25.581801470Z" level=error msg="ContainerStatus for \"5a2ce1f32479cfe53c6da5953dde24f4fa149439054fd35ad1e24f516dc84069\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5a2ce1f32479cfe53c6da5953dde24f4fa149439054fd35ad1e24f516dc84069\": not found"
Mar 14 00:23:25.599746 kubelet[3195]: E0314 00:23:25.599193 3195 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5a2ce1f32479cfe53c6da5953dde24f4fa149439054fd35ad1e24f516dc84069\": not found" containerID="5a2ce1f32479cfe53c6da5953dde24f4fa149439054fd35ad1e24f516dc84069"
Mar 14 00:23:25.599746 kubelet[3195]: I0314 00:23:25.599235 3195 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5a2ce1f32479cfe53c6da5953dde24f4fa149439054fd35ad1e24f516dc84069"} err="failed to get container status \"5a2ce1f32479cfe53c6da5953dde24f4fa149439054fd35ad1e24f516dc84069\": rpc error: code = NotFound desc = an error occurred when try to find container \"5a2ce1f32479cfe53c6da5953dde24f4fa149439054fd35ad1e24f516dc84069\": not found"
Mar 14 00:23:25.599746 kubelet[3195]: I0314 00:23:25.599276 3195 scope.go:122] "RemoveContainer" containerID="a28b93738c18e4ecfc2249305d2fa17ce6d47a279113f7997d46cbbd4876a32e"
Mar 14 00:23:25.602884 containerd[1984]: time="2026-03-14T00:23:25.602494270Z" level=info msg="RemoveContainer for \"a28b93738c18e4ecfc2249305d2fa17ce6d47a279113f7997d46cbbd4876a32e\""
Mar 14 00:23:25.611154 containerd[1984]: time="2026-03-14T00:23:25.610980365Z" level=info msg="RemoveContainer for \"a28b93738c18e4ecfc2249305d2fa17ce6d47a279113f7997d46cbbd4876a32e\" returns successfully"
Mar 14 00:23:25.611734 kubelet[3195]: I0314 00:23:25.611710 3195 scope.go:122] "RemoveContainer" containerID="8bc8b190e730acb498b0348386e92e0d667a0472a5a4e49d2ccbd8be3c6bc394"
Mar 14 00:23:25.619105 containerd[1984]: time="2026-03-14T00:23:25.618248371Z" level=info msg="RemoveContainer for \"8bc8b190e730acb498b0348386e92e0d667a0472a5a4e49d2ccbd8be3c6bc394\""
Mar 14 00:23:25.648084 containerd[1984]: time="2026-03-14T00:23:25.648035630Z" level=info msg="RemoveContainer for \"8bc8b190e730acb498b0348386e92e0d667a0472a5a4e49d2ccbd8be3c6bc394\" returns successfully"
Mar 14 00:23:25.648342 kubelet[3195]: I0314 00:23:25.648312 3195 scope.go:122] "RemoveContainer" containerID="40677cd6d602e53b089a2266adb6c2f52c4b958dc2ebb5c82276486d14e17dc0"
Mar 14 00:23:25.649510 containerd[1984]: time="2026-03-14T00:23:25.649478184Z" level=info msg="RemoveContainer for \"40677cd6d602e53b089a2266adb6c2f52c4b958dc2ebb5c82276486d14e17dc0\""
Mar 14 00:23:25.657612 containerd[1984]: time="2026-03-14T00:23:25.657422817Z" level=info msg="RemoveContainer for \"40677cd6d602e53b089a2266adb6c2f52c4b958dc2ebb5c82276486d14e17dc0\" returns successfully"
Mar 14 00:23:25.658018 kubelet[3195]: I0314 00:23:25.657898 3195 scope.go:122] "RemoveContainer" containerID="6e76500461a1abbcb0f521502edf8a0a53a59bcf5e72b3cc6f2d7ae3f1e68a5a"
Mar 14 00:23:25.659166 containerd[1984]: time="2026-03-14T00:23:25.659131202Z" level=info msg="RemoveContainer for \"6e76500461a1abbcb0f521502edf8a0a53a59bcf5e72b3cc6f2d7ae3f1e68a5a\""
Mar 14 00:23:25.664714 containerd[1984]: time="2026-03-14T00:23:25.664668251Z" level=info msg="RemoveContainer for \"6e76500461a1abbcb0f521502edf8a0a53a59bcf5e72b3cc6f2d7ae3f1e68a5a\" returns successfully"
Mar 14 00:23:25.665005 kubelet[3195]: I0314 00:23:25.664972 3195 scope.go:122] "RemoveContainer" containerID="428028c9e022940f93aac3d752be938d72b4bf2614f038d8a99ac9661493ffa6"
Mar 14 00:23:25.666299 containerd[1984]: time="2026-03-14T00:23:25.666153307Z" level=info msg="RemoveContainer for \"428028c9e022940f93aac3d752be938d72b4bf2614f038d8a99ac9661493ffa6\""
Mar 14 00:23:25.673511 containerd[1984]: time="2026-03-14T00:23:25.673456666Z" level=info msg="RemoveContainer for \"428028c9e022940f93aac3d752be938d72b4bf2614f038d8a99ac9661493ffa6\" returns successfully"
Mar 14 00:23:25.673720 kubelet[3195]: I0314 00:23:25.673681 3195 scope.go:122] "RemoveContainer" containerID="a28b93738c18e4ecfc2249305d2fa17ce6d47a279113f7997d46cbbd4876a32e"
Mar 14 00:23:25.674003 containerd[1984]: time="2026-03-14T00:23:25.673961082Z" level=error msg="ContainerStatus for \"a28b93738c18e4ecfc2249305d2fa17ce6d47a279113f7997d46cbbd4876a32e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a28b93738c18e4ecfc2249305d2fa17ce6d47a279113f7997d46cbbd4876a32e\": not found"
Mar 14 00:23:25.674167 kubelet[3195]: E0314 00:23:25.674133 3195 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a28b93738c18e4ecfc2249305d2fa17ce6d47a279113f7997d46cbbd4876a32e\": not found" containerID="a28b93738c18e4ecfc2249305d2fa17ce6d47a279113f7997d46cbbd4876a32e"
Mar 14 00:23:25.674247 kubelet[3195]: I0314 00:23:25.674173 3195 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a28b93738c18e4ecfc2249305d2fa17ce6d47a279113f7997d46cbbd4876a32e"} err="failed to get container status \"a28b93738c18e4ecfc2249305d2fa17ce6d47a279113f7997d46cbbd4876a32e\": rpc error: code = NotFound desc = an error occurred when try to find container \"a28b93738c18e4ecfc2249305d2fa17ce6d47a279113f7997d46cbbd4876a32e\": not found"
Mar 14 00:23:25.674247 kubelet[3195]: I0314 00:23:25.674200 3195 scope.go:122] "RemoveContainer" containerID="8bc8b190e730acb498b0348386e92e0d667a0472a5a4e49d2ccbd8be3c6bc394"
Mar 14 00:23:25.674460 containerd[1984]: time="2026-03-14T00:23:25.674421731Z" level=error msg="ContainerStatus for \"8bc8b190e730acb498b0348386e92e0d667a0472a5a4e49d2ccbd8be3c6bc394\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8bc8b190e730acb498b0348386e92e0d667a0472a5a4e49d2ccbd8be3c6bc394\": not found"
Mar 14 00:23:25.674589 kubelet[3195]: E0314 00:23:25.674560 3195 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8bc8b190e730acb498b0348386e92e0d667a0472a5a4e49d2ccbd8be3c6bc394\": not found" containerID="8bc8b190e730acb498b0348386e92e0d667a0472a5a4e49d2ccbd8be3c6bc394"
Mar 14 00:23:25.674659 kubelet[3195]: I0314 00:23:25.674593 3195 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8bc8b190e730acb498b0348386e92e0d667a0472a5a4e49d2ccbd8be3c6bc394"} err="failed to get container status \"8bc8b190e730acb498b0348386e92e0d667a0472a5a4e49d2ccbd8be3c6bc394\": rpc error: code = NotFound desc = an error occurred when try to find container \"8bc8b190e730acb498b0348386e92e0d667a0472a5a4e49d2ccbd8be3c6bc394\": not found"
Mar 14 00:23:25.674659 kubelet[3195]: I0314 00:23:25.674614 3195 scope.go:122] "RemoveContainer" containerID="40677cd6d602e53b089a2266adb6c2f52c4b958dc2ebb5c82276486d14e17dc0"
Mar 14 00:23:25.674884 containerd[1984]: time="2026-03-14T00:23:25.674842848Z" level=error msg="ContainerStatus for \"40677cd6d602e53b089a2266adb6c2f52c4b958dc2ebb5c82276486d14e17dc0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"40677cd6d602e53b089a2266adb6c2f52c4b958dc2ebb5c82276486d14e17dc0\": not found"
Mar 14 00:23:25.675036 kubelet[3195]: E0314 00:23:25.674995 3195 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"40677cd6d602e53b089a2266adb6c2f52c4b958dc2ebb5c82276486d14e17dc0\": not found" containerID="40677cd6d602e53b089a2266adb6c2f52c4b958dc2ebb5c82276486d14e17dc0"
Mar 14 00:23:25.675036 kubelet[3195]: I0314 00:23:25.675032 3195 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"40677cd6d602e53b089a2266adb6c2f52c4b958dc2ebb5c82276486d14e17dc0"} err="failed to get container status \"40677cd6d602e53b089a2266adb6c2f52c4b958dc2ebb5c82276486d14e17dc0\": rpc error: code = NotFound desc = an error occurred when try to find container \"40677cd6d602e53b089a2266adb6c2f52c4b958dc2ebb5c82276486d14e17dc0\": not found"
Mar 14 00:23:25.675036 kubelet[3195]: I0314 00:23:25.675051 3195 scope.go:122] "RemoveContainer" containerID="6e76500461a1abbcb0f521502edf8a0a53a59bcf5e72b3cc6f2d7ae3f1e68a5a"
Mar 14 00:23:25.675280 containerd[1984]: time="2026-03-14T00:23:25.675240128Z" level=error msg="ContainerStatus for \"6e76500461a1abbcb0f521502edf8a0a53a59bcf5e72b3cc6f2d7ae3f1e68a5a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6e76500461a1abbcb0f521502edf8a0a53a59bcf5e72b3cc6f2d7ae3f1e68a5a\": not found"
Mar 14 00:23:25.675385 kubelet[3195]: E0314 00:23:25.675362 3195 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6e76500461a1abbcb0f521502edf8a0a53a59bcf5e72b3cc6f2d7ae3f1e68a5a\": not found" containerID="6e76500461a1abbcb0f521502edf8a0a53a59bcf5e72b3cc6f2d7ae3f1e68a5a"
Mar 14 00:23:25.675504 kubelet[3195]: I0314 00:23:25.675387 3195 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6e76500461a1abbcb0f521502edf8a0a53a59bcf5e72b3cc6f2d7ae3f1e68a5a"} err="failed to get container status \"6e76500461a1abbcb0f521502edf8a0a53a59bcf5e72b3cc6f2d7ae3f1e68a5a\": rpc error: code = NotFound desc = an error occurred when try to find container \"6e76500461a1abbcb0f521502edf8a0a53a59bcf5e72b3cc6f2d7ae3f1e68a5a\": not found"
Mar 14 00:23:25.675504 kubelet[3195]: I0314 00:23:25.675428 3195 scope.go:122] "RemoveContainer" containerID="428028c9e022940f93aac3d752be938d72b4bf2614f038d8a99ac9661493ffa6"
Mar 14 00:23:25.675635 containerd[1984]: time="2026-03-14T00:23:25.675599262Z" level=error msg="ContainerStatus for \"428028c9e022940f93aac3d752be938d72b4bf2614f038d8a99ac9661493ffa6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"428028c9e022940f93aac3d752be938d72b4bf2614f038d8a99ac9661493ffa6\": not found"
Mar 14 00:23:25.676202 kubelet[3195]: E0314 00:23:25.675740 3195 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"428028c9e022940f93aac3d752be938d72b4bf2614f038d8a99ac9661493ffa6\": not found" containerID="428028c9e022940f93aac3d752be938d72b4bf2614f038d8a99ac9661493ffa6"
Mar 14 00:23:25.676202 kubelet[3195]: I0314 00:23:25.675774 3195 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"428028c9e022940f93aac3d752be938d72b4bf2614f038d8a99ac9661493ffa6"} err="failed to get container status \"428028c9e022940f93aac3d752be938d72b4bf2614f038d8a99ac9661493ffa6\": rpc error: code = NotFound desc = an error occurred when try to find container \"428028c9e022940f93aac3d752be938d72b4bf2614f038d8a99ac9661493ffa6\": not found"
Mar 14 00:23:25.929429 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ee9434b6c1dc7978924145a84593c24b15d992bf2c0a76d829f055daddb8fb1-rootfs.mount: Deactivated successfully.
Mar 14 00:23:25.929829 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d2ef007924fc1bdcc63f8dcb983924c1efa5e9bc502cf5d7e18c5bbe22686d82-rootfs.mount: Deactivated successfully.
Mar 14 00:23:25.929926 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d2ef007924fc1bdcc63f8dcb983924c1efa5e9bc502cf5d7e18c5bbe22686d82-shm.mount: Deactivated successfully.
Mar 14 00:23:25.930013 systemd[1]: var-lib-kubelet-pods-0906451b\x2d22e7\x2d4a77\x2dad89\x2d2d3271295240-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2x9bk.mount: Deactivated successfully.
Mar 14 00:23:25.930100 systemd[1]: var-lib-kubelet-pods-2c05e857\x2de645\x2d4351\x2d9a22\x2dc2489fb18543-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d445pn.mount: Deactivated successfully.
Mar 14 00:23:25.930188 systemd[1]: var-lib-kubelet-pods-2c05e857\x2de645\x2d4351\x2d9a22\x2dc2489fb18543-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Mar 14 00:23:25.930279 systemd[1]: var-lib-kubelet-pods-2c05e857\x2de645\x2d4351\x2d9a22\x2dc2489fb18543-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 14 00:23:26.922418 sshd[5067]: pam_unix(sshd:session): session closed for user core
Mar 14 00:23:26.926087 systemd[1]: sshd@23-172.31.23.47:22-68.220.241.50:32818.service: Deactivated successfully.
Mar 14 00:23:26.928624 systemd[1]: session-24.scope: Deactivated successfully.
Mar 14 00:23:26.928829 systemd[1]: session-24.scope: Consumed 1.041s CPU time.
Mar 14 00:23:26.930725 systemd-logind[1962]: Session 24 logged out. Waiting for processes to exit.
Mar 14 00:23:26.931940 systemd-logind[1962]: Removed session 24.
Mar 14 00:23:27.011767 systemd[1]: Started sshd@24-172.31.23.47:22-68.220.241.50:32832.service - OpenSSH per-connection server daemon (68.220.241.50:32832).
Mar 14 00:23:27.247853 kubelet[3195]: I0314 00:23:27.247804 3195 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="0906451b-22e7-4a77-ad89-2d3271295240" path="/var/lib/kubelet/pods/0906451b-22e7-4a77-ad89-2d3271295240/volumes"
Mar 14 00:23:27.248416 kubelet[3195]: I0314 00:23:27.248375 3195 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="2c05e857-e645-4351-9a22-c2489fb18543" path="/var/lib/kubelet/pods/2c05e857-e645-4351-9a22-c2489fb18543/volumes"
Mar 14 00:23:27.492425 sshd[5230]: Accepted publickey for core from 68.220.241.50 port 32832 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:23:27.494042 sshd[5230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:23:27.500071 systemd-logind[1962]: New session 25 of user core.
Mar 14 00:23:27.504609 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 14 00:23:27.909095 ntpd[1954]: Deleting interface #12 lxc_health, fe80::14f1:21ff:fe8a:12dd%8#123, interface stats: received=0, sent=0, dropped=0, active_time=69 secs
Mar 14 00:23:27.909539 ntpd[1954]: 14 Mar 00:23:27 ntpd[1954]: Deleting interface #12 lxc_health, fe80::14f1:21ff:fe8a:12dd%8#123, interface stats: received=0, sent=0, dropped=0, active_time=69 secs
Mar 14 00:23:28.340556 kubelet[3195]: E0314 00:23:28.340461 3195 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 14 00:23:28.395536 sshd[5230]: pam_unix(sshd:session): session closed for user core
Mar 14 00:23:28.406068 systemd[1]: sshd@24-172.31.23.47:22-68.220.241.50:32832.service: Deactivated successfully.
Mar 14 00:23:28.413807 systemd[1]: session-25.scope: Deactivated successfully.
Mar 14 00:23:28.421256 systemd-logind[1962]: Session 25 logged out. Waiting for processes to exit.
Mar 14 00:23:28.430149 systemd-logind[1962]: Removed session 25.
Mar 14 00:23:28.444736 kubelet[3195]: I0314 00:23:28.444603 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f0423455-ac3e-442d-8f2d-7f155fbcb8ba-cilium-cgroup\") pod \"cilium-bw56g\" (UID: \"f0423455-ac3e-442d-8f2d-7f155fbcb8ba\") " pod="kube-system/cilium-bw56g"
Mar 14 00:23:28.444736 kubelet[3195]: I0314 00:23:28.444652 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f0423455-ac3e-442d-8f2d-7f155fbcb8ba-host-proc-sys-kernel\") pod \"cilium-bw56g\" (UID: \"f0423455-ac3e-442d-8f2d-7f155fbcb8ba\") " pod="kube-system/cilium-bw56g"
Mar 14 00:23:28.444736 kubelet[3195]: I0314 00:23:28.444683 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t42nz\" (UniqueName: \"kubernetes.io/projected/f0423455-ac3e-442d-8f2d-7f155fbcb8ba-kube-api-access-t42nz\") pod \"cilium-bw56g\" (UID: \"f0423455-ac3e-442d-8f2d-7f155fbcb8ba\") " pod="kube-system/cilium-bw56g"
Mar 14 00:23:28.444736 kubelet[3195]: I0314 00:23:28.444711 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f0423455-ac3e-442d-8f2d-7f155fbcb8ba-bpf-maps\") pod \"cilium-bw56g\" (UID: \"f0423455-ac3e-442d-8f2d-7f155fbcb8ba\") " pod="kube-system/cilium-bw56g"
Mar 14 00:23:28.447767 kubelet[3195]: I0314 00:23:28.445075 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f0423455-ac3e-442d-8f2d-7f155fbcb8ba-etc-cni-netd\") pod \"cilium-bw56g\" (UID: \"f0423455-ac3e-442d-8f2d-7f155fbcb8ba\") " pod="kube-system/cilium-bw56g"
Mar 14 00:23:28.447767 kubelet[3195]: I0314 00:23:28.445106 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f0423455-ac3e-442d-8f2d-7f155fbcb8ba-lib-modules\") pod \"cilium-bw56g\" (UID: \"f0423455-ac3e-442d-8f2d-7f155fbcb8ba\") " pod="kube-system/cilium-bw56g"
Mar 14 00:23:28.447767 kubelet[3195]: I0314 00:23:28.445133 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f0423455-ac3e-442d-8f2d-7f155fbcb8ba-host-proc-sys-net\") pod \"cilium-bw56g\" (UID: \"f0423455-ac3e-442d-8f2d-7f155fbcb8ba\") " pod="kube-system/cilium-bw56g"
Mar 14 00:23:28.447767 kubelet[3195]: I0314 00:23:28.445157 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f0423455-ac3e-442d-8f2d-7f155fbcb8ba-xtables-lock\") pod \"cilium-bw56g\" (UID: \"f0423455-ac3e-442d-8f2d-7f155fbcb8ba\") " pod="kube-system/cilium-bw56g"
Mar 14 00:23:28.447767 kubelet[3195]: I0314 00:23:28.445179 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f0423455-ac3e-442d-8f2d-7f155fbcb8ba-hubble-tls\") pod \"cilium-bw56g\" (UID: \"f0423455-ac3e-442d-8f2d-7f155fbcb8ba\") " pod="kube-system/cilium-bw56g"
Mar 14 00:23:28.447767 kubelet[3195]: I0314 00:23:28.445203 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f0423455-ac3e-442d-8f2d-7f155fbcb8ba-cilium-config-path\") pod \"cilium-bw56g\" (UID: \"f0423455-ac3e-442d-8f2d-7f155fbcb8ba\") " pod="kube-system/cilium-bw56g"
Mar 14 00:23:28.448127 kubelet[3195]: I0314 00:23:28.445232 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f0423455-ac3e-442d-8f2d-7f155fbcb8ba-hostproc\") pod \"cilium-bw56g\" (UID: \"f0423455-ac3e-442d-8f2d-7f155fbcb8ba\") " pod="kube-system/cilium-bw56g"
Mar 14 00:23:28.448127 kubelet[3195]: I0314 00:23:28.447454 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f0423455-ac3e-442d-8f2d-7f155fbcb8ba-cni-path\") pod \"cilium-bw56g\" (UID: \"f0423455-ac3e-442d-8f2d-7f155fbcb8ba\") " pod="kube-system/cilium-bw56g"
Mar 14 00:23:28.448127 kubelet[3195]: I0314 00:23:28.447490 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f0423455-ac3e-442d-8f2d-7f155fbcb8ba-clustermesh-secrets\") pod \"cilium-bw56g\" (UID: \"f0423455-ac3e-442d-8f2d-7f155fbcb8ba\") " pod="kube-system/cilium-bw56g"
Mar 14 00:23:28.448127 kubelet[3195]: I0314 00:23:28.447513 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f0423455-ac3e-442d-8f2d-7f155fbcb8ba-cilium-ipsec-secrets\") pod \"cilium-bw56g\" (UID: \"f0423455-ac3e-442d-8f2d-7f155fbcb8ba\") " pod="kube-system/cilium-bw56g"
Mar 14 00:23:28.448127 kubelet[3195]: I0314 00:23:28.447537 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f0423455-ac3e-442d-8f2d-7f155fbcb8ba-cilium-run\") pod \"cilium-bw56g\" (UID: \"f0423455-ac3e-442d-8f2d-7f155fbcb8ba\") " pod="kube-system/cilium-bw56g"
Mar 14 00:23:28.461815 systemd[1]: Created slice kubepods-burstable-podf0423455_ac3e_442d_8f2d_7f155fbcb8ba.slice - libcontainer container kubepods-burstable-podf0423455_ac3e_442d_8f2d_7f155fbcb8ba.slice.
Mar 14 00:23:28.490665 systemd[1]: Started sshd@25-172.31.23.47:22-68.220.241.50:32848.service - OpenSSH per-connection server daemon (68.220.241.50:32848).
Mar 14 00:23:28.773784 containerd[1984]: time="2026-03-14T00:23:28.773721702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bw56g,Uid:f0423455-ac3e-442d-8f2d-7f155fbcb8ba,Namespace:kube-system,Attempt:0,}"
Mar 14 00:23:28.807362 containerd[1984]: time="2026-03-14T00:23:28.807018765Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:23:28.807362 containerd[1984]: time="2026-03-14T00:23:28.807137758Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:23:28.807362 containerd[1984]: time="2026-03-14T00:23:28.807177226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:23:28.807362 containerd[1984]: time="2026-03-14T00:23:28.807291393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:23:28.834653 systemd[1]: Started cri-containerd-f91d62e106912154030c119f89b9e0dd229848228412ad6a9367e55351da81be.scope - libcontainer container f91d62e106912154030c119f89b9e0dd229848228412ad6a9367e55351da81be.
Mar 14 00:23:28.861634 containerd[1984]: time="2026-03-14T00:23:28.861501107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bw56g,Uid:f0423455-ac3e-442d-8f2d-7f155fbcb8ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"f91d62e106912154030c119f89b9e0dd229848228412ad6a9367e55351da81be\""
Mar 14 00:23:28.871947 containerd[1984]: time="2026-03-14T00:23:28.871902221Z" level=info msg="CreateContainer within sandbox \"f91d62e106912154030c119f89b9e0dd229848228412ad6a9367e55351da81be\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 14 00:23:28.891288 containerd[1984]: time="2026-03-14T00:23:28.891234240Z" level=info msg="CreateContainer within sandbox \"f91d62e106912154030c119f89b9e0dd229848228412ad6a9367e55351da81be\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9cdc90cf9eba7b191994621b8bb98321c7cca1cea69a9d2277ac268dc3fb5638\""
Mar 14 00:23:28.893276 containerd[1984]: time="2026-03-14T00:23:28.892249509Z" level=info msg="StartContainer for \"9cdc90cf9eba7b191994621b8bb98321c7cca1cea69a9d2277ac268dc3fb5638\""
Mar 14 00:23:28.920604 systemd[1]: Started cri-containerd-9cdc90cf9eba7b191994621b8bb98321c7cca1cea69a9d2277ac268dc3fb5638.scope - libcontainer container 9cdc90cf9eba7b191994621b8bb98321c7cca1cea69a9d2277ac268dc3fb5638.
Mar 14 00:23:28.950823 containerd[1984]: time="2026-03-14T00:23:28.950600908Z" level=info msg="StartContainer for \"9cdc90cf9eba7b191994621b8bb98321c7cca1cea69a9d2277ac268dc3fb5638\" returns successfully"
Mar 14 00:23:28.973582 systemd[1]: cri-containerd-9cdc90cf9eba7b191994621b8bb98321c7cca1cea69a9d2277ac268dc3fb5638.scope: Deactivated successfully.
Mar 14 00:23:28.975997 sshd[5241]: Accepted publickey for core from 68.220.241.50 port 32848 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:23:28.979854 sshd[5241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:23:28.987919 systemd-logind[1962]: New session 26 of user core.
Mar 14 00:23:28.995602 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 14 00:23:29.034891 containerd[1984]: time="2026-03-14T00:23:29.034498421Z" level=info msg="shim disconnected" id=9cdc90cf9eba7b191994621b8bb98321c7cca1cea69a9d2277ac268dc3fb5638 namespace=k8s.io
Mar 14 00:23:29.034891 containerd[1984]: time="2026-03-14T00:23:29.034554822Z" level=warning msg="cleaning up after shim disconnected" id=9cdc90cf9eba7b191994621b8bb98321c7cca1cea69a9d2277ac268dc3fb5638 namespace=k8s.io
Mar 14 00:23:29.034891 containerd[1984]: time="2026-03-14T00:23:29.034564127Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:23:29.325048 sshd[5241]: pam_unix(sshd:session): session closed for user core
Mar 14 00:23:29.330033 systemd[1]: sshd@25-172.31.23.47:22-68.220.241.50:32848.service: Deactivated successfully.
Mar 14 00:23:29.332224 systemd[1]: session-26.scope: Deactivated successfully.
Mar 14 00:23:29.333134 systemd-logind[1962]: Session 26 logged out. Waiting for processes to exit.
Mar 14 00:23:29.334247 systemd-logind[1962]: Removed session 26.
Mar 14 00:23:29.421954 systemd[1]: Started sshd@26-172.31.23.47:22-68.220.241.50:32854.service - OpenSSH per-connection server daemon (68.220.241.50:32854).
Mar 14 00:23:29.597486 containerd[1984]: time="2026-03-14T00:23:29.595347165Z" level=info msg="CreateContainer within sandbox \"f91d62e106912154030c119f89b9e0dd229848228412ad6a9367e55351da81be\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 14 00:23:29.622543 containerd[1984]: time="2026-03-14T00:23:29.622497486Z" level=info msg="CreateContainer within sandbox \"f91d62e106912154030c119f89b9e0dd229848228412ad6a9367e55351da81be\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"911ec001a1ab4644efec6c2a542902fdfe2acde0452811550851a3642bcc3ed5\""
Mar 14 00:23:29.623438 containerd[1984]: time="2026-03-14T00:23:29.623360684Z" level=info msg="StartContainer for \"911ec001a1ab4644efec6c2a542902fdfe2acde0452811550851a3642bcc3ed5\""
Mar 14 00:23:29.671649 systemd[1]: Started cri-containerd-911ec001a1ab4644efec6c2a542902fdfe2acde0452811550851a3642bcc3ed5.scope - libcontainer container 911ec001a1ab4644efec6c2a542902fdfe2acde0452811550851a3642bcc3ed5.
Mar 14 00:23:29.701322 containerd[1984]: time="2026-03-14T00:23:29.701284378Z" level=info msg="StartContainer for \"911ec001a1ab4644efec6c2a542902fdfe2acde0452811550851a3642bcc3ed5\" returns successfully"
Mar 14 00:23:29.715935 systemd[1]: cri-containerd-911ec001a1ab4644efec6c2a542902fdfe2acde0452811550851a3642bcc3ed5.scope: Deactivated successfully.
Mar 14 00:23:29.752212 containerd[1984]: time="2026-03-14T00:23:29.752142154Z" level=info msg="shim disconnected" id=911ec001a1ab4644efec6c2a542902fdfe2acde0452811550851a3642bcc3ed5 namespace=k8s.io
Mar 14 00:23:29.752212 containerd[1984]: time="2026-03-14T00:23:29.752209253Z" level=warning msg="cleaning up after shim disconnected" id=911ec001a1ab4644efec6c2a542902fdfe2acde0452811550851a3642bcc3ed5 namespace=k8s.io
Mar 14 00:23:29.752540 containerd[1984]: time="2026-03-14T00:23:29.752222356Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:23:29.910849 sshd[5359]: Accepted publickey for core from 68.220.241.50 port 32854 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:23:29.912033 sshd[5359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:23:29.917479 systemd-logind[1962]: New session 27 of user core.
Mar 14 00:23:29.924601 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 14 00:23:30.557987 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-911ec001a1ab4644efec6c2a542902fdfe2acde0452811550851a3642bcc3ed5-rootfs.mount: Deactivated successfully.
Mar 14 00:23:30.591896 containerd[1984]: time="2026-03-14T00:23:30.591729789Z" level=info msg="CreateContainer within sandbox \"f91d62e106912154030c119f89b9e0dd229848228412ad6a9367e55351da81be\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 14 00:23:30.623710 containerd[1984]: time="2026-03-14T00:23:30.623662435Z" level=info msg="CreateContainer within sandbox \"f91d62e106912154030c119f89b9e0dd229848228412ad6a9367e55351da81be\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1cc6804072952475d673b2e6a1d86e1c35bc2da80b31cee2db881fc2c5970146\""
Mar 14 00:23:30.624255 containerd[1984]: time="2026-03-14T00:23:30.624219427Z" level=info msg="StartContainer for \"1cc6804072952475d673b2e6a1d86e1c35bc2da80b31cee2db881fc2c5970146\""
Mar 14 00:23:30.674606 systemd[1]: Started cri-containerd-1cc6804072952475d673b2e6a1d86e1c35bc2da80b31cee2db881fc2c5970146.scope - libcontainer container 1cc6804072952475d673b2e6a1d86e1c35bc2da80b31cee2db881fc2c5970146.
Mar 14 00:23:30.709665 containerd[1984]: time="2026-03-14T00:23:30.709608714Z" level=info msg="StartContainer for \"1cc6804072952475d673b2e6a1d86e1c35bc2da80b31cee2db881fc2c5970146\" returns successfully"
Mar 14 00:23:30.716700 systemd[1]: cri-containerd-1cc6804072952475d673b2e6a1d86e1c35bc2da80b31cee2db881fc2c5970146.scope: Deactivated successfully.
Mar 14 00:23:30.757743 containerd[1984]: time="2026-03-14T00:23:30.757658992Z" level=info msg="shim disconnected" id=1cc6804072952475d673b2e6a1d86e1c35bc2da80b31cee2db881fc2c5970146 namespace=k8s.io
Mar 14 00:23:30.757743 containerd[1984]: time="2026-03-14T00:23:30.757738583Z" level=warning msg="cleaning up after shim disconnected" id=1cc6804072952475d673b2e6a1d86e1c35bc2da80b31cee2db881fc2c5970146 namespace=k8s.io
Mar 14 00:23:30.757743 containerd[1984]: time="2026-03-14T00:23:30.757751419Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:23:30.771948 containerd[1984]: time="2026-03-14T00:23:30.771887406Z" level=warning msg="cleanup warnings time=\"2026-03-14T00:23:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 14 00:23:31.558084 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1cc6804072952475d673b2e6a1d86e1c35bc2da80b31cee2db881fc2c5970146-rootfs.mount: Deactivated successfully.
Mar 14 00:23:31.597131 containerd[1984]: time="2026-03-14T00:23:31.597091688Z" level=info msg="CreateContainer within sandbox \"f91d62e106912154030c119f89b9e0dd229848228412ad6a9367e55351da81be\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 14 00:23:31.626487 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2615944898.mount: Deactivated successfully.
Mar 14 00:23:31.629416 containerd[1984]: time="2026-03-14T00:23:31.629355820Z" level=info msg="CreateContainer within sandbox \"f91d62e106912154030c119f89b9e0dd229848228412ad6a9367e55351da81be\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3b48a2e3f313bc77e5c888467e6bcf8cba3752fe1f09abdebeb8d4632f58b258\""
Mar 14 00:23:31.631885 containerd[1984]: time="2026-03-14T00:23:31.631841906Z" level=info msg="StartContainer for \"3b48a2e3f313bc77e5c888467e6bcf8cba3752fe1f09abdebeb8d4632f58b258\""
Mar 14 00:23:31.686634 systemd[1]: Started cri-containerd-3b48a2e3f313bc77e5c888467e6bcf8cba3752fe1f09abdebeb8d4632f58b258.scope - libcontainer container 3b48a2e3f313bc77e5c888467e6bcf8cba3752fe1f09abdebeb8d4632f58b258.
Mar 14 00:23:31.743354 systemd[1]: cri-containerd-3b48a2e3f313bc77e5c888467e6bcf8cba3752fe1f09abdebeb8d4632f58b258.scope: Deactivated successfully.
Mar 14 00:23:31.748888 containerd[1984]: time="2026-03-14T00:23:31.748712134Z" level=info msg="StartContainer for \"3b48a2e3f313bc77e5c888467e6bcf8cba3752fe1f09abdebeb8d4632f58b258\" returns successfully"
Mar 14 00:23:31.784287 containerd[1984]: time="2026-03-14T00:23:31.784230143Z" level=info msg="shim disconnected" id=3b48a2e3f313bc77e5c888467e6bcf8cba3752fe1f09abdebeb8d4632f58b258 namespace=k8s.io
Mar 14 00:23:31.784287 containerd[1984]: time="2026-03-14T00:23:31.784284786Z" level=warning msg="cleaning up after shim disconnected" id=3b48a2e3f313bc77e5c888467e6bcf8cba3752fe1f09abdebeb8d4632f58b258 namespace=k8s.io
Mar 14 00:23:31.784693 containerd[1984]: time="2026-03-14T00:23:31.784296144Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:23:32.558197 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b48a2e3f313bc77e5c888467e6bcf8cba3752fe1f09abdebeb8d4632f58b258-rootfs.mount: Deactivated successfully.
Mar 14 00:23:32.604064 containerd[1984]: time="2026-03-14T00:23:32.604014559Z" level=info msg="CreateContainer within sandbox \"f91d62e106912154030c119f89b9e0dd229848228412ad6a9367e55351da81be\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 14 00:23:32.629822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1156991667.mount: Deactivated successfully.
Mar 14 00:23:32.632727 containerd[1984]: time="2026-03-14T00:23:32.632682344Z" level=info msg="CreateContainer within sandbox \"f91d62e106912154030c119f89b9e0dd229848228412ad6a9367e55351da81be\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c1b61c3605d34f5493eb60c3bc4a6041b5ddaade6df7dfa36d7430562a95b801\""
Mar 14 00:23:32.633460 containerd[1984]: time="2026-03-14T00:23:32.633345522Z" level=info msg="StartContainer for \"c1b61c3605d34f5493eb60c3bc4a6041b5ddaade6df7dfa36d7430562a95b801\""
Mar 14 00:23:32.684593 systemd[1]: Started cri-containerd-c1b61c3605d34f5493eb60c3bc4a6041b5ddaade6df7dfa36d7430562a95b801.scope - libcontainer container c1b61c3605d34f5493eb60c3bc4a6041b5ddaade6df7dfa36d7430562a95b801.
Mar 14 00:23:32.722357 containerd[1984]: time="2026-03-14T00:23:32.722297636Z" level=info msg="StartContainer for \"c1b61c3605d34f5493eb60c3bc4a6041b5ddaade6df7dfa36d7430562a95b801\" returns successfully"
Mar 14 00:23:33.311430 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Mar 14 00:23:36.231141 systemd-networkd[1883]: lxc_health: Link UP
Mar 14 00:23:36.233605 (udev-worker)[6100]: Network interface NamePolicy= disabled on kernel command line.
Mar 14 00:23:36.238528 systemd-networkd[1883]: lxc_health: Gained carrier
Mar 14 00:23:36.843678 systemd[1]: run-containerd-runc-k8s.io-c1b61c3605d34f5493eb60c3bc4a6041b5ddaade6df7dfa36d7430562a95b801-runc.A5FMnQ.mount: Deactivated successfully.
Mar 14 00:23:36.849818 kubelet[3195]: I0314 00:23:36.848885 3195 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-bw56g" podStartSLOduration=8.848867775 podStartE2EDuration="8.848867775s" podCreationTimestamp="2026-03-14 00:23:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:23:33.623893725 +0000 UTC m=+110.521215958" watchObservedRunningTime="2026-03-14 00:23:36.848867775 +0000 UTC m=+113.746190008"
Mar 14 00:23:37.703506 systemd-networkd[1883]: lxc_health: Gained IPv6LL
Mar 14 00:23:39.909158 ntpd[1954]: Listen normally on 15 lxc_health [fe80::f898:f2ff:fef3:1ead%14]:123
Mar 14 00:23:39.909774 ntpd[1954]: 14 Mar 00:23:39 ntpd[1954]: Listen normally on 15 lxc_health [fe80::f898:f2ff:fef3:1ead%14]:123
Mar 14 00:23:42.382373 systemd[1]: run-containerd-runc-k8s.io-c1b61c3605d34f5493eb60c3bc4a6041b5ddaade6df7dfa36d7430562a95b801-runc.15wv5h.mount: Deactivated successfully.
Mar 14 00:23:43.243654 containerd[1984]: time="2026-03-14T00:23:43.243480685Z" level=info msg="StopPodSandbox for \"d2ef007924fc1bdcc63f8dcb983924c1efa5e9bc502cf5d7e18c5bbe22686d82\""
Mar 14 00:23:43.246766 containerd[1984]: time="2026-03-14T00:23:43.246597240Z" level=info msg="TearDown network for sandbox \"d2ef007924fc1bdcc63f8dcb983924c1efa5e9bc502cf5d7e18c5bbe22686d82\" successfully"
Mar 14 00:23:43.246766 containerd[1984]: time="2026-03-14T00:23:43.246683197Z" level=info msg="StopPodSandbox for \"d2ef007924fc1bdcc63f8dcb983924c1efa5e9bc502cf5d7e18c5bbe22686d82\" returns successfully"
Mar 14 00:23:43.248630 containerd[1984]: time="2026-03-14T00:23:43.247829381Z" level=info msg="RemovePodSandbox for \"d2ef007924fc1bdcc63f8dcb983924c1efa5e9bc502cf5d7e18c5bbe22686d82\""
Mar 14 00:23:43.248630 containerd[1984]: time="2026-03-14T00:23:43.247869794Z" level=info msg="Forcibly stopping sandbox \"d2ef007924fc1bdcc63f8dcb983924c1efa5e9bc502cf5d7e18c5bbe22686d82\""
Mar 14 00:23:43.248630 containerd[1984]: time="2026-03-14T00:23:43.247932183Z" level=info msg="TearDown network for sandbox \"d2ef007924fc1bdcc63f8dcb983924c1efa5e9bc502cf5d7e18c5bbe22686d82\" successfully"
Mar 14 00:23:43.256057 containerd[1984]: time="2026-03-14T00:23:43.256012682Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d2ef007924fc1bdcc63f8dcb983924c1efa5e9bc502cf5d7e18c5bbe22686d82\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 14 00:23:43.256239 containerd[1984]: time="2026-03-14T00:23:43.256211219Z" level=info msg="RemovePodSandbox \"d2ef007924fc1bdcc63f8dcb983924c1efa5e9bc502cf5d7e18c5bbe22686d82\" returns successfully"
Mar 14 00:23:43.407484 containerd[1984]: time="2026-03-14T00:23:43.407387381Z" level=info msg="StopPodSandbox for \"7ee9434b6c1dc7978924145a84593c24b15d992bf2c0a76d829f055daddb8fb1\""
Mar 14 00:23:43.407642 containerd[1984]: time="2026-03-14T00:23:43.407523166Z" level=info msg="TearDown network for sandbox \"7ee9434b6c1dc7978924145a84593c24b15d992bf2c0a76d829f055daddb8fb1\" successfully"
Mar 14 00:23:43.407642 containerd[1984]: time="2026-03-14T00:23:43.407539580Z" level=info msg="StopPodSandbox for \"7ee9434b6c1dc7978924145a84593c24b15d992bf2c0a76d829f055daddb8fb1\" returns successfully"
Mar 14 00:23:43.407919 containerd[1984]: time="2026-03-14T00:23:43.407889305Z" level=info msg="RemovePodSandbox for \"7ee9434b6c1dc7978924145a84593c24b15d992bf2c0a76d829f055daddb8fb1\""
Mar 14 00:23:43.408014 containerd[1984]: time="2026-03-14T00:23:43.407921295Z" level=info msg="Forcibly stopping sandbox \"7ee9434b6c1dc7978924145a84593c24b15d992bf2c0a76d829f055daddb8fb1\""
Mar 14 00:23:43.408014 containerd[1984]: time="2026-03-14T00:23:43.407991353Z" level=info msg="TearDown network for sandbox \"7ee9434b6c1dc7978924145a84593c24b15d992bf2c0a76d829f055daddb8fb1\" successfully"
Mar 14 00:23:43.413386 containerd[1984]: time="2026-03-14T00:23:43.413337723Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7ee9434b6c1dc7978924145a84593c24b15d992bf2c0a76d829f055daddb8fb1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 14 00:23:43.413521 containerd[1984]: time="2026-03-14T00:23:43.413422358Z" level=info msg="RemovePodSandbox \"7ee9434b6c1dc7978924145a84593c24b15d992bf2c0a76d829f055daddb8fb1\" returns successfully"
Mar 14 00:23:43.437770 sshd[5359]: pam_unix(sshd:session): session closed for user core
Mar 14 00:23:43.442152 systemd-logind[1962]: Session 27 logged out. Waiting for processes to exit.
Mar 14 00:23:43.443171 systemd[1]: sshd@26-172.31.23.47:22-68.220.241.50:32854.service: Deactivated successfully.
Mar 14 00:23:43.445290 systemd[1]: session-27.scope: Deactivated successfully.
Mar 14 00:23:43.446984 systemd-logind[1962]: Removed session 27.