Mar 7 01:08:59.218077 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 6 22:58:19 -00 2026
Mar 7 01:08:59.218119 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:08:59.218140 kernel: BIOS-provided physical RAM map:
Mar 7 01:08:59.218154 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 7 01:08:59.218166 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Mar 7 01:08:59.218179 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Mar 7 01:08:59.218195 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Mar 7 01:08:59.218208 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Mar 7 01:08:59.218222 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Mar 7 01:08:59.218239 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Mar 7 01:08:59.218252 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Mar 7 01:08:59.218265 kernel: NX (Execute Disable) protection: active
Mar 7 01:08:59.218279 kernel: APIC: Static calls initialized
Mar 7 01:08:59.218293 kernel: efi: EFI v2.7 by EDK II
Mar 7 01:08:59.218310 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77015518
Mar 7 01:08:59.218328 kernel: SMBIOS 2.7 present.
Mar 7 01:08:59.218343 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Mar 7 01:08:59.218358 kernel: Hypervisor detected: KVM
Mar 7 01:08:59.218373 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 7 01:08:59.218387 kernel: kvm-clock: using sched offset of 4127079696 cycles
Mar 7 01:08:59.218403 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 7 01:08:59.218418 kernel: tsc: Detected 2499.994 MHz processor
Mar 7 01:08:59.218434 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 7 01:08:59.218449 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 7 01:08:59.218463 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Mar 7 01:08:59.218482 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 7 01:08:59.218497 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 7 01:08:59.218512 kernel: Using GB pages for direct mapping
Mar 7 01:08:59.218528 kernel: Secure boot disabled
Mar 7 01:08:59.218542 kernel: ACPI: Early table checksum verification disabled
Mar 7 01:08:59.218557 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Mar 7 01:08:59.218572 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Mar 7 01:08:59.218587 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Mar 7 01:08:59.218603 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Mar 7 01:08:59.218621 kernel: ACPI: FACS 0x00000000789D0000 000040
Mar 7 01:08:59.218636 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Mar 7 01:08:59.218652 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Mar 7 01:08:59.218667 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Mar 7 01:08:59.218682 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Mar 7 01:08:59.218698 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Mar 7 01:08:59.218720 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Mar 7 01:08:59.218739 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Mar 7 01:08:59.218754 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Mar 7 01:08:59.218771 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Mar 7 01:08:59.218787 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Mar 7 01:08:59.218803 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Mar 7 01:08:59.218819 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Mar 7 01:08:59.218835 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Mar 7 01:08:59.218854 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Mar 7 01:08:59.218871 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Mar 7 01:08:59.218887 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Mar 7 01:08:59.219668 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Mar 7 01:08:59.219690 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Mar 7 01:08:59.219705 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Mar 7 01:08:59.219720 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 7 01:08:59.219736 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Mar 7 01:08:59.219752 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Mar 7 01:08:59.219773 kernel: NUMA: Initialized distance table, cnt=1
Mar 7 01:08:59.219787 kernel: NODE_DATA(0) allocated [mem 0x7a8f0000-0x7a8f5fff]
Mar 7 01:08:59.219803 kernel: Zone ranges:
Mar 7 01:08:59.219818 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 7 01:08:59.221512 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Mar 7 01:08:59.222137 kernel: Normal empty
Mar 7 01:08:59.222151 kernel: Movable zone start for each node
Mar 7 01:08:59.222164 kernel: Early memory node ranges
Mar 7 01:08:59.222179 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 7 01:08:59.222201 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Mar 7 01:08:59.222218 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Mar 7 01:08:59.222234 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Mar 7 01:08:59.222250 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 7 01:08:59.222263 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 7 01:08:59.222277 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Mar 7 01:08:59.222294 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Mar 7 01:08:59.222310 kernel: ACPI: PM-Timer IO Port: 0xb008
Mar 7 01:08:59.222327 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 7 01:08:59.222346 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Mar 7 01:08:59.222363 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 7 01:08:59.222378 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 7 01:08:59.222395 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 7 01:08:59.222411 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 7 01:08:59.222427 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 7 01:08:59.222444 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 7 01:08:59.222460 kernel: TSC deadline timer available
Mar 7 01:08:59.222476 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 7 01:08:59.222492 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 7 01:08:59.222511 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Mar 7 01:08:59.222528 kernel: Booting paravirtualized kernel on KVM
Mar 7 01:08:59.222544 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 7 01:08:59.222560 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 7 01:08:59.222576 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Mar 7 01:08:59.222593 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Mar 7 01:08:59.222609 kernel: pcpu-alloc: [0] 0 1
Mar 7 01:08:59.222625 kernel: kvm-guest: PV spinlocks enabled
Mar 7 01:08:59.222642 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 7 01:08:59.222664 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:08:59.222680 kernel: random: crng init done
Mar 7 01:08:59.222697 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 7 01:08:59.222713 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 7 01:08:59.222729 kernel: Fallback order for Node 0: 0
Mar 7 01:08:59.222745 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Mar 7 01:08:59.222761 kernel: Policy zone: DMA32
Mar 7 01:08:59.222778 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 7 01:08:59.222797 kernel: Memory: 1874628K/2037804K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 162916K reserved, 0K cma-reserved)
Mar 7 01:08:59.222814 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 7 01:08:59.222830 kernel: Kernel/User page tables isolation: enabled
Mar 7 01:08:59.222846 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 7 01:08:59.222863 kernel: ftrace: allocated 149 pages with 4 groups
Mar 7 01:08:59.222879 kernel: Dynamic Preempt: voluntary
Mar 7 01:08:59.225934 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 7 01:08:59.225968 kernel: rcu: RCU event tracing is enabled.
Mar 7 01:08:59.225984 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 7 01:08:59.226007 kernel: Trampoline variant of Tasks RCU enabled.
Mar 7 01:08:59.226022 kernel: Rude variant of Tasks RCU enabled.
Mar 7 01:08:59.226037 kernel: Tracing variant of Tasks RCU enabled.
Mar 7 01:08:59.226053 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 7 01:08:59.226068 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 7 01:08:59.226084 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 7 01:08:59.226099 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 7 01:08:59.226129 kernel: Console: colour dummy device 80x25
Mar 7 01:08:59.226145 kernel: printk: console [tty0] enabled
Mar 7 01:08:59.226161 kernel: printk: console [ttyS0] enabled
Mar 7 01:08:59.226177 kernel: ACPI: Core revision 20230628
Mar 7 01:08:59.226193 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Mar 7 01:08:59.226212 kernel: APIC: Switch to symmetric I/O mode setup
Mar 7 01:08:59.226229 kernel: x2apic enabled
Mar 7 01:08:59.226245 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 7 01:08:59.226261 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns
Mar 7 01:08:59.226278 kernel: Calibrating delay loop (skipped) preset value.. 4999.98 BogoMIPS (lpj=2499994)
Mar 7 01:08:59.226298 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Mar 7 01:08:59.226314 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Mar 7 01:08:59.226330 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 7 01:08:59.226345 kernel: Spectre V2 : Mitigation: Retpolines
Mar 7 01:08:59.226361 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 7 01:08:59.226377 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Mar 7 01:08:59.226393 kernel: RETBleed: Vulnerable
Mar 7 01:08:59.226408 kernel: Speculative Store Bypass: Vulnerable
Mar 7 01:08:59.226424 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 7 01:08:59.226440 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 7 01:08:59.226459 kernel: GDS: Unknown: Dependent on hypervisor status
Mar 7 01:08:59.226475 kernel: active return thunk: its_return_thunk
Mar 7 01:08:59.226491 kernel: ITS: Mitigation: Aligned branch/return thunks
Mar 7 01:08:59.226507 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 7 01:08:59.226523 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 7 01:08:59.226539 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 7 01:08:59.226555 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Mar 7 01:08:59.226571 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Mar 7 01:08:59.226586 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Mar 7 01:08:59.226602 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Mar 7 01:08:59.226618 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Mar 7 01:08:59.226637 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Mar 7 01:08:59.226653 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 7 01:08:59.226669 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Mar 7 01:08:59.226684 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Mar 7 01:08:59.226700 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Mar 7 01:08:59.226716 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Mar 7 01:08:59.226732 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Mar 7 01:08:59.226748 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Mar 7 01:08:59.226764 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Mar 7 01:08:59.226780 kernel: Freeing SMP alternatives memory: 32K
Mar 7 01:08:59.226796 kernel: pid_max: default: 32768 minimum: 301
Mar 7 01:08:59.226815 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 7 01:08:59.226830 kernel: landlock: Up and running.
Mar 7 01:08:59.226846 kernel: SELinux: Initializing.
Mar 7 01:08:59.226862 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 7 01:08:59.226878 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 7 01:08:59.226905 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8175M CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x4)
Mar 7 01:08:59.226922 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:08:59.226938 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:08:59.226955 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 01:08:59.226971 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Mar 7 01:08:59.226991 kernel: signal: max sigframe size: 3632
Mar 7 01:08:59.227007 kernel: rcu: Hierarchical SRCU implementation.
Mar 7 01:08:59.227023 kernel: rcu: Max phase no-delay instances is 400.
Mar 7 01:08:59.227039 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 7 01:08:59.227055 kernel: smp: Bringing up secondary CPUs ...
Mar 7 01:08:59.227071 kernel: smpboot: x86: Booting SMP configuration:
Mar 7 01:08:59.227087 kernel: .... node #0, CPUs: #1
Mar 7 01:08:59.227104 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Mar 7 01:08:59.227122 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Mar 7 01:08:59.227141 kernel: smp: Brought up 1 node, 2 CPUs
Mar 7 01:08:59.227157 kernel: smpboot: Max logical packages: 1
Mar 7 01:08:59.227174 kernel: smpboot: Total of 2 processors activated (9999.97 BogoMIPS)
Mar 7 01:08:59.227189 kernel: devtmpfs: initialized
Mar 7 01:08:59.227206 kernel: x86/mm: Memory block size: 128MB
Mar 7 01:08:59.227222 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Mar 7 01:08:59.227238 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 7 01:08:59.227254 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 7 01:08:59.227270 kernel: pinctrl core: initialized pinctrl subsystem
Mar 7 01:08:59.227289 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 7 01:08:59.227305 kernel: audit: initializing netlink subsys (disabled)
Mar 7 01:08:59.227321 kernel: audit: type=2000 audit(1772845738.951:1): state=initialized audit_enabled=0 res=1
Mar 7 01:08:59.227337 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 7 01:08:59.227353 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 7 01:08:59.227368 kernel: cpuidle: using governor menu
Mar 7 01:08:59.227384 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 7 01:08:59.227401 kernel: dca service started, version 1.12.1
Mar 7 01:08:59.227417 kernel: PCI: Using configuration type 1 for base access
Mar 7 01:08:59.227436 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 7 01:08:59.227452 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 7 01:08:59.227468 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 7 01:08:59.227484 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 7 01:08:59.227500 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 7 01:08:59.227516 kernel: ACPI: Added _OSI(Module Device)
Mar 7 01:08:59.227531 kernel: ACPI: Added _OSI(Processor Device)
Mar 7 01:08:59.227547 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 7 01:08:59.227563 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Mar 7 01:08:59.227583 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 7 01:08:59.227599 kernel: ACPI: Interpreter enabled
Mar 7 01:08:59.227615 kernel: ACPI: PM: (supports S0 S5)
Mar 7 01:08:59.227631 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 7 01:08:59.227647 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 7 01:08:59.227663 kernel: PCI: Using E820 reservations for host bridge windows
Mar 7 01:08:59.227679 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Mar 7 01:08:59.227695 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 7 01:08:59.231199 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Mar 7 01:08:59.231430 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Mar 7 01:08:59.231580 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Mar 7 01:08:59.231604 kernel: acpiphp: Slot [3] registered
Mar 7 01:08:59.231622 kernel: acpiphp: Slot [4] registered
Mar 7 01:08:59.231638 kernel: acpiphp: Slot [5] registered
Mar 7 01:08:59.231655 kernel: acpiphp: Slot [6] registered
Mar 7 01:08:59.231671 kernel: acpiphp: Slot [7] registered
Mar 7 01:08:59.231693 kernel: acpiphp: Slot [8] registered
Mar 7 01:08:59.231709 kernel: acpiphp: Slot [9] registered
Mar 7 01:08:59.231723 kernel: acpiphp: Slot [10] registered
Mar 7 01:08:59.231736 kernel: acpiphp: Slot [11] registered
Mar 7 01:08:59.231749 kernel: acpiphp: Slot [12] registered
Mar 7 01:08:59.231763 kernel: acpiphp: Slot [13] registered
Mar 7 01:08:59.231777 kernel: acpiphp: Slot [14] registered
Mar 7 01:08:59.231790 kernel: acpiphp: Slot [15] registered
Mar 7 01:08:59.231804 kernel: acpiphp: Slot [16] registered
Mar 7 01:08:59.231817 kernel: acpiphp: Slot [17] registered
Mar 7 01:08:59.231835 kernel: acpiphp: Slot [18] registered
Mar 7 01:08:59.231848 kernel: acpiphp: Slot [19] registered
Mar 7 01:08:59.231862 kernel: acpiphp: Slot [20] registered
Mar 7 01:08:59.231878 kernel: acpiphp: Slot [21] registered
Mar 7 01:08:59.231935 kernel: acpiphp: Slot [22] registered
Mar 7 01:08:59.231950 kernel: acpiphp: Slot [23] registered
Mar 7 01:08:59.231965 kernel: acpiphp: Slot [24] registered
Mar 7 01:08:59.231981 kernel: acpiphp: Slot [25] registered
Mar 7 01:08:59.231997 kernel: acpiphp: Slot [26] registered
Mar 7 01:08:59.232017 kernel: acpiphp: Slot [27] registered
Mar 7 01:08:59.232033 kernel: acpiphp: Slot [28] registered
Mar 7 01:08:59.232049 kernel: acpiphp: Slot [29] registered
Mar 7 01:08:59.232066 kernel: acpiphp: Slot [30] registered
Mar 7 01:08:59.232082 kernel: acpiphp: Slot [31] registered
Mar 7 01:08:59.232098 kernel: PCI host bridge to bus 0000:00
Mar 7 01:08:59.232267 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 7 01:08:59.232391 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 7 01:08:59.232514 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 7 01:08:59.232634 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Mar 7 01:08:59.232755 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Mar 7 01:08:59.232877 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 7 01:08:59.234376 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Mar 7 01:08:59.234539 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Mar 7 01:08:59.234689 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Mar 7 01:08:59.234831 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Mar 7 01:08:59.236224 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Mar 7 01:08:59.236393 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Mar 7 01:08:59.236551 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Mar 7 01:08:59.236704 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Mar 7 01:08:59.236856 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Mar 7 01:08:59.237844 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Mar 7 01:08:59.238118 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Mar 7 01:08:59.238261 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Mar 7 01:08:59.238396 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Mar 7 01:08:59.238531 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Mar 7 01:08:59.238665 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 7 01:08:59.238814 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Mar 7 01:08:59.238982 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Mar 7 01:08:59.239142 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Mar 7 01:08:59.239292 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Mar 7 01:08:59.239314 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 7 01:08:59.239333 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 7 01:08:59.239351 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 7 01:08:59.239367 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 7 01:08:59.239382 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Mar 7 01:08:59.239402 kernel: iommu: Default domain type: Translated
Mar 7 01:08:59.239417 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 7 01:08:59.239433 kernel: efivars: Registered efivars operations
Mar 7 01:08:59.239448 kernel: PCI: Using ACPI for IRQ routing
Mar 7 01:08:59.239464 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 7 01:08:59.239480 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Mar 7 01:08:59.239495 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Mar 7 01:08:59.239641 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Mar 7 01:08:59.239810 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Mar 7 01:08:59.240059 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 7 01:08:59.240083 kernel: vgaarb: loaded
Mar 7 01:08:59.240101 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Mar 7 01:08:59.240119 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Mar 7 01:08:59.240136 kernel: clocksource: Switched to clocksource kvm-clock
Mar 7 01:08:59.240153 kernel: VFS: Disk quotas dquot_6.6.0
Mar 7 01:08:59.240170 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 7 01:08:59.240188 kernel: pnp: PnP ACPI init
Mar 7 01:08:59.240205 kernel: pnp: PnP ACPI: found 5 devices
Mar 7 01:08:59.240226 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 7 01:08:59.240244 kernel: NET: Registered PF_INET protocol family
Mar 7 01:08:59.240261 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 7 01:08:59.240278 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Mar 7 01:08:59.240294 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 7 01:08:59.240311 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 7 01:08:59.240328 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Mar 7 01:08:59.240345 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Mar 7 01:08:59.240365 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 7 01:08:59.240382 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 7 01:08:59.240398 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 7 01:08:59.240415 kernel: NET: Registered PF_XDP protocol family
Mar 7 01:08:59.240546 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 7 01:08:59.240668 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 7 01:08:59.240817 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 7 01:08:59.241019 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Mar 7 01:08:59.241170 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Mar 7 01:08:59.241342 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Mar 7 01:08:59.241366 kernel: PCI: CLS 0 bytes, default 64
Mar 7 01:08:59.241383 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 7 01:08:59.241399 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns
Mar 7 01:08:59.241415 kernel: clocksource: Switched to clocksource tsc
Mar 7 01:08:59.241430 kernel: Initialise system trusted keyrings
Mar 7 01:08:59.241446 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Mar 7 01:08:59.241461 kernel: Key type asymmetric registered
Mar 7 01:08:59.241482 kernel: Asymmetric key parser 'x509' registered
Mar 7 01:08:59.241498 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 7 01:08:59.241526 kernel: io scheduler mq-deadline registered
Mar 7 01:08:59.241540 kernel: io scheduler kyber registered
Mar 7 01:08:59.241555 kernel: io scheduler bfq registered
Mar 7 01:08:59.241568 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 7 01:08:59.241583 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 7 01:08:59.241598 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 7 01:08:59.241614 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 7 01:08:59.241633 kernel: i8042: Warning: Keylock active
Mar 7 01:08:59.241648 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 7 01:08:59.241663 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 7 01:08:59.241845 kernel: rtc_cmos 00:00: RTC can wake from S4
Mar 7 01:08:59.242045 kernel: rtc_cmos 00:00: registered as rtc0
Mar 7 01:08:59.242174 kernel: rtc_cmos 00:00: setting system clock to 2026-03-07T01:08:58 UTC (1772845738)
Mar 7 01:08:59.242300 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Mar 7 01:08:59.242321 kernel: intel_pstate: CPU model not supported
Mar 7 01:08:59.242344 kernel: efifb: probing for efifb
Mar 7 01:08:59.242361 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Mar 7 01:08:59.242378 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Mar 7 01:08:59.242394 kernel: efifb: scrolling: redraw
Mar 7 01:08:59.242410 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 7 01:08:59.242426 kernel: Console: switching to colour frame buffer device 100x37
Mar 7 01:08:59.242443 kernel: fb0: EFI VGA frame buffer device
Mar 7 01:08:59.242459 kernel: pstore: Using crash dump compression: deflate
Mar 7 01:08:59.242476 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 7 01:08:59.242495 kernel: NET: Registered PF_INET6 protocol family
Mar 7 01:08:59.242511 kernel: Segment Routing with IPv6
Mar 7 01:08:59.242527 kernel: In-situ OAM (IOAM) with IPv6
Mar 7 01:08:59.242543 kernel: NET: Registered PF_PACKET protocol family
Mar 7 01:08:59.242559 kernel: Key type dns_resolver registered
Mar 7 01:08:59.242576 kernel: IPI shorthand broadcast: enabled
Mar 7 01:08:59.242619 kernel: sched_clock: Marking stable (539002021, 173644475)->(819350473, -106703977)
Mar 7 01:08:59.242640 kernel: registered taskstats version 1
Mar 7 01:08:59.242657 kernel: Loading compiled-in X.509 certificates
Mar 7 01:08:59.242676 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: da286e6f6c247ee6f65a875c513de7da57782e90'
Mar 7 01:08:59.242693 kernel: Key type .fscrypt registered
Mar 7 01:08:59.242710 kernel: Key type fscrypt-provisioning registered
Mar 7 01:08:59.242727 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 7 01:08:59.242745 kernel: ima: Allocated hash algorithm: sha1
Mar 7 01:08:59.242762 kernel: ima: No architecture policies found
Mar 7 01:08:59.242779 kernel: clk: Disabling unused clocks
Mar 7 01:08:59.242796 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 7 01:08:59.242813 kernel: Write protecting the kernel read-only data: 36864k
Mar 7 01:08:59.242833 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 7 01:08:59.242850 kernel: Run /init as init process
Mar 7 01:08:59.242867 kernel: with arguments:
Mar 7 01:08:59.242883 kernel: /init
Mar 7 01:08:59.242951 kernel: with environment:
Mar 7 01:08:59.242968 kernel: HOME=/
Mar 7 01:08:59.242986 kernel: TERM=linux
Mar 7 01:08:59.243006 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:08:59.243030 systemd[1]: Detected virtualization amazon.
Mar 7 01:08:59.243048 systemd[1]: Detected architecture x86-64.
Mar 7 01:08:59.243069 systemd[1]: Running in initrd.
Mar 7 01:08:59.243087 systemd[1]: No hostname configured, using default hostname.
Mar 7 01:08:59.243104 systemd[1]: Hostname set to .
Mar 7 01:08:59.243122 systemd[1]: Initializing machine ID from VM UUID.
Mar 7 01:08:59.243140 systemd[1]: Queued start job for default target initrd.target.
Mar 7 01:08:59.243159 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:08:59.243180 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:08:59.243199 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 7 01:08:59.243217 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:08:59.243237 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 7 01:08:59.243257 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 7 01:08:59.243279 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 7 01:08:59.243300 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 7 01:08:59.243318 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 7 01:08:59.243334 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:08:59.243860 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:08:59.247721 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:08:59.247738 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:08:59.247763 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:08:59.247778 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:08:59.247793 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:08:59.247810 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:08:59.247829 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 01:08:59.247844 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 7 01:08:59.247859 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:08:59.247875 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:08:59.247892 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:08:59.247951 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:08:59.247966 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 7 01:08:59.247983 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:08:59.248003 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 7 01:08:59.248022 systemd[1]: Starting systemd-fsck-usr.service...
Mar 7 01:08:59.248041 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:08:59.248061 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:08:59.248080 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:08:59.248145 systemd-journald[179]: Collecting audit messages is disabled.
Mar 7 01:08:59.248191 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 7 01:08:59.248210 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:08:59.248230 systemd[1]: Finished systemd-fsck-usr.service.
Mar 7 01:08:59.248253 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:08:59.248274 systemd-journald[179]: Journal started
Mar 7 01:08:59.248314 systemd-journald[179]: Runtime Journal (/run/log/journal/ec20ac8353a61bf08e8f3587e2d1b354) is 4.7M, max 38.2M, 33.4M free.
Mar 7 01:08:59.247771 systemd-modules-load[180]: Inserted module 'overlay'
Mar 7 01:08:59.283983 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:08:59.289119 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:08:59.309932 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 7 01:08:59.313948 kernel: Bridge firewalling registered
Mar 7 01:08:59.313153 systemd-modules-load[180]: Inserted module 'br_netfilter'
Mar 7 01:08:59.313223 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:08:59.317156 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:08:59.319990 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:08:59.322638 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:08:59.334116 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:08:59.338091 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:08:59.341792 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:08:59.351101 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 7 01:08:59.356282 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:08:59.362186 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:08:59.374164 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 01:08:59.376107 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:08:59.386630 dracut-cmdline[210]: dracut-dracut-053
Mar 7 01:08:59.389957 dracut-cmdline[210]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:08:59.429977 systemd-resolved[217]: Positive Trust Anchors:
Mar 7 01:08:59.430000 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 01:08:59.430061 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 01:08:59.436596 systemd-resolved[217]: Defaulting to hostname 'linux'.
Mar 7 01:08:59.440232 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 01:08:59.442157 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:08:59.484939 kernel: SCSI subsystem initialized
Mar 7 01:08:59.494926 kernel: Loading iSCSI transport class v2.0-870.
Mar 7 01:08:59.506943 kernel: iscsi: registered transport (tcp)
Mar 7 01:08:59.529362 kernel: iscsi: registered transport (qla4xxx)
Mar 7 01:08:59.529442 kernel: QLogic iSCSI HBA Driver
Mar 7 01:08:59.571212 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:08:59.576126 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 7 01:08:59.605478 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 7 01:08:59.605578 kernel: device-mapper: uevent: version 1.0.3
Mar 7 01:08:59.605603 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 7 01:08:59.649946 kernel: raid6: avx512x4 gen() 17579 MB/s
Mar 7 01:08:59.667928 kernel: raid6: avx512x2 gen() 17195 MB/s
Mar 7 01:08:59.685935 kernel: raid6: avx512x1 gen() 17345 MB/s
Mar 7 01:08:59.703921 kernel: raid6: avx2x4 gen() 17374 MB/s
Mar 7 01:08:59.721931 kernel: raid6: avx2x2 gen() 16359 MB/s
Mar 7 01:08:59.741217 kernel: raid6: avx2x1 gen() 13224 MB/s
Mar 7 01:08:59.741287 kernel: raid6: using algorithm avx512x4 gen() 17579 MB/s
Mar 7 01:08:59.761194 kernel: raid6: .... xor() 7543 MB/s, rmw enabled
Mar 7 01:08:59.761277 kernel: raid6: using avx512x2 recovery algorithm
Mar 7 01:08:59.784939 kernel: xor: automatically using best checksumming function avx
Mar 7 01:08:59.951933 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 7 01:08:59.969580 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:08:59.979182 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:08:59.998625 systemd-udevd[399]: Using default interface naming scheme 'v255'.
Mar 7 01:09:00.008231 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:09:00.022464 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 7 01:09:00.122860 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
Mar 7 01:09:00.204142 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 01:09:00.211323 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 01:09:00.328583 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:09:00.338356 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 7 01:09:00.366420 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:09:00.374311 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:09:00.376608 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:09:00.377680 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 01:09:00.383191 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 7 01:09:00.413322 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:09:00.444928 kernel: cryptd: max_cpu_qlen set to 1000
Mar 7 01:09:00.455574 kernel: ena 0000:00:05.0: ENA device version: 0.10
Mar 7 01:09:00.455935 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Mar 7 01:09:00.465319 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:09:00.466363 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:09:00.468568 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:09:00.470954 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:09:00.471166 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:09:00.473002 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:09:00.482923 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Mar 7 01:09:00.485320 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:09:00.494943 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 7 01:09:00.499938 kernel: AES CTR mode by8 optimization enabled
Mar 7 01:09:00.502906 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:09:00.504001 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:09:00.517090 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:65:43:4e:fd:7d
Mar 7 01:09:00.522918 kernel: nvme nvme0: pci function 0000:00:04.0
Mar 7 01:09:00.524649 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Mar 7 01:09:00.523329 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:09:00.541935 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Mar 7 01:09:00.552248 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 7 01:09:00.553223 kernel: GPT:9289727 != 33554431
Mar 7 01:09:00.553246 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 7 01:09:00.559271 kernel: GPT:9289727 != 33554431
Mar 7 01:09:00.559425 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 7 01:09:00.559447 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 7 01:09:00.565447 (udev-worker)[447]: Network interface NamePolicy= disabled on kernel command line.
Mar 7 01:09:00.580405 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:09:00.589196 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:09:00.616050 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:09:00.802953 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (454)
Mar 7 01:09:00.904928 kernel: BTRFS: device fsid 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 devid 1 transid 34 /dev/nvme0n1p3 scanned by (udev-worker) (457)
Mar 7 01:09:01.118953 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Mar 7 01:09:01.147316 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Mar 7 01:09:01.182754 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Mar 7 01:09:01.183458 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Mar 7 01:09:01.217073 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 7 01:09:01.241187 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 7 01:09:01.345239 disk-uuid[632]: Primary Header is updated.
Mar 7 01:09:01.345239 disk-uuid[632]: Secondary Entries is updated.
Mar 7 01:09:01.345239 disk-uuid[632]: Secondary Header is updated.
Mar 7 01:09:01.354021 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 7 01:09:01.382115 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 7 01:09:01.395935 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 7 01:09:02.392008 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 7 01:09:02.392400 disk-uuid[633]: The operation has completed successfully.
Mar 7 01:09:02.554042 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 7 01:09:02.554178 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 7 01:09:02.578163 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 7 01:09:02.582247 sh[978]: Success
Mar 7 01:09:02.609466 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Mar 7 01:09:02.759299 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 7 01:09:02.777275 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 7 01:09:02.784052 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 7 01:09:02.807330 kernel: BTRFS info (device dm-0): first mount of filesystem 3bed8db9-42ad-4483-9cc8-1ad17a6cd948
Mar 7 01:09:02.809132 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:09:02.809171 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 7 01:09:02.813875 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 7 01:09:02.813976 kernel: BTRFS info (device dm-0): using free space tree
Mar 7 01:09:02.923956 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 7 01:09:02.950232 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 7 01:09:02.952237 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 7 01:09:02.960484 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 7 01:09:02.964168 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 7 01:09:02.997927 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:09:03.003226 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:09:03.003293 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 7 01:09:03.010940 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 7 01:09:03.030618 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:09:03.030029 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 7 01:09:03.040663 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 7 01:09:03.047284 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 7 01:09:03.108776 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 01:09:03.114120 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 01:09:03.139432 systemd-networkd[1170]: lo: Link UP
Mar 7 01:09:03.139450 systemd-networkd[1170]: lo: Gained carrier
Mar 7 01:09:03.141704 systemd-networkd[1170]: Enumeration completed
Mar 7 01:09:03.142002 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 01:09:03.142458 systemd-networkd[1170]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:09:03.142463 systemd-networkd[1170]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 01:09:03.144599 systemd[1]: Reached target network.target - Network.
Mar 7 01:09:03.145091 systemd-networkd[1170]: eth0: Link UP
Mar 7 01:09:03.145096 systemd-networkd[1170]: eth0: Gained carrier
Mar 7 01:09:03.145108 systemd-networkd[1170]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:09:03.156046 systemd-networkd[1170]: eth0: DHCPv4 address 172.31.31.131/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 7 01:09:03.377108 ignition[1088]: Ignition 2.19.0
Mar 7 01:09:03.377126 ignition[1088]: Stage: fetch-offline
Mar 7 01:09:03.377378 ignition[1088]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:09:03.377392 ignition[1088]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 01:09:03.378020 ignition[1088]: Ignition finished successfully
Mar 7 01:09:03.379654 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 01:09:03.385125 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 7 01:09:03.401663 ignition[1179]: Ignition 2.19.0
Mar 7 01:09:03.401682 ignition[1179]: Stage: fetch
Mar 7 01:09:03.402154 ignition[1179]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:09:03.402170 ignition[1179]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 01:09:03.402289 ignition[1179]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 01:09:03.411689 ignition[1179]: PUT result: OK
Mar 7 01:09:03.414290 ignition[1179]: parsed url from cmdline: ""
Mar 7 01:09:03.414301 ignition[1179]: no config URL provided
Mar 7 01:09:03.414312 ignition[1179]: reading system config file "/usr/lib/ignition/user.ign"
Mar 7 01:09:03.414334 ignition[1179]: no config at "/usr/lib/ignition/user.ign"
Mar 7 01:09:03.414361 ignition[1179]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 01:09:03.414990 ignition[1179]: PUT result: OK
Mar 7 01:09:03.415066 ignition[1179]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Mar 7 01:09:03.415626 ignition[1179]: GET result: OK
Mar 7 01:09:03.415763 ignition[1179]: parsing config with SHA512: 7d89da90fb6814ec7c0b41a3f66fc262fb3fd6ffa34594bdd1728cb830705bba650891ade2649ba509027b5745a52cc15ba8550983b29338c5870572d5076f1d
Mar 7 01:09:03.421846 unknown[1179]: fetched base config from "system"
Mar 7 01:09:03.421859 unknown[1179]: fetched base config from "system"
Mar 7 01:09:03.421866 unknown[1179]: fetched user config from "aws"
Mar 7 01:09:03.422737 ignition[1179]: fetch: fetch complete
Mar 7 01:09:03.422745 ignition[1179]: fetch: fetch passed
Mar 7 01:09:03.422806 ignition[1179]: Ignition finished successfully
Mar 7 01:09:03.425375 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 7 01:09:03.432259 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 7 01:09:03.454635 ignition[1185]: Ignition 2.19.0
Mar 7 01:09:03.454654 ignition[1185]: Stage: kargs
Mar 7 01:09:03.455177 ignition[1185]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:09:03.455192 ignition[1185]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 01:09:03.455316 ignition[1185]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 01:09:03.456149 ignition[1185]: PUT result: OK
Mar 7 01:09:03.459108 ignition[1185]: kargs: kargs passed
Mar 7 01:09:03.459190 ignition[1185]: Ignition finished successfully
Mar 7 01:09:03.461133 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 7 01:09:03.466106 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 7 01:09:03.482821 ignition[1191]: Ignition 2.19.0
Mar 7 01:09:03.482838 ignition[1191]: Stage: disks
Mar 7 01:09:03.483349 ignition[1191]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:09:03.483363 ignition[1191]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 01:09:03.483492 ignition[1191]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 01:09:03.484423 ignition[1191]: PUT result: OK
Mar 7 01:09:03.487629 ignition[1191]: disks: disks passed
Mar 7 01:09:03.487710 ignition[1191]: Ignition finished successfully
Mar 7 01:09:03.489690 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 7 01:09:03.490417 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 7 01:09:03.490782 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 7 01:09:03.491362 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 01:09:03.491935 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 01:09:03.492534 systemd[1]: Reached target basic.target - Basic System.
Mar 7 01:09:03.499156 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 7 01:09:03.538987 systemd-fsck[1199]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 7 01:09:03.546231 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 7 01:09:03.553239 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 7 01:09:03.660923 kernel: EXT4-fs (nvme0n1p9): mounted filesystem aab0506b-de72-4dd2-9393-24d7958f49a5 r/w with ordered data mode. Quota mode: none.
Mar 7 01:09:03.662074 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 7 01:09:03.663281 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 7 01:09:03.675057 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:09:03.679203 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 7 01:09:03.680581 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 7 01:09:03.680758 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 7 01:09:03.680800 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 01:09:03.695603 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 7 01:09:03.701935 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1218)
Mar 7 01:09:03.712866 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:09:03.712967 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:09:03.712988 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 7 01:09:03.713024 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 7 01:09:03.727042 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 7 01:09:03.729176 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:09:04.091229 initrd-setup-root[1242]: cut: /sysroot/etc/passwd: No such file or directory
Mar 7 01:09:04.108182 initrd-setup-root[1249]: cut: /sysroot/etc/group: No such file or directory
Mar 7 01:09:04.114803 initrd-setup-root[1256]: cut: /sysroot/etc/shadow: No such file or directory
Mar 7 01:09:04.120935 initrd-setup-root[1263]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 7 01:09:04.345135 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 7 01:09:04.352052 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 7 01:09:04.355095 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 7 01:09:04.367973 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:09:04.368258 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 7 01:09:04.398855 ignition[1331]: INFO : Ignition 2.19.0
Mar 7 01:09:04.401636 ignition[1331]: INFO : Stage: mount
Mar 7 01:09:04.401636 ignition[1331]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:09:04.401636 ignition[1331]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 01:09:04.401636 ignition[1331]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 01:09:04.406208 ignition[1331]: INFO : PUT result: OK
Mar 7 01:09:04.410395 ignition[1331]: INFO : mount: mount passed
Mar 7 01:09:04.411101 ignition[1331]: INFO : Ignition finished successfully
Mar 7 01:09:04.413605 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 7 01:09:04.414426 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 7 01:09:04.421074 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 7 01:09:04.438187 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:09:04.451083 systemd-networkd[1170]: eth0: Gained IPv6LL
Mar 7 01:09:04.483205 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1342)
Mar 7 01:09:04.500525 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:09:04.503013 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:09:04.503045 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 7 01:09:04.519930 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 7 01:09:04.522810 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:09:04.550266 ignition[1359]: INFO : Ignition 2.19.0
Mar 7 01:09:04.550266 ignition[1359]: INFO : Stage: files
Mar 7 01:09:04.551808 ignition[1359]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:09:04.551808 ignition[1359]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 01:09:04.551808 ignition[1359]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 01:09:04.551808 ignition[1359]: INFO : PUT result: OK
Mar 7 01:09:04.555132 ignition[1359]: DEBUG : files: compiled without relabeling support, skipping
Mar 7 01:09:04.555787 ignition[1359]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 7 01:09:04.555787 ignition[1359]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 7 01:09:04.578645 ignition[1359]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 7 01:09:04.580026 ignition[1359]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 7 01:09:04.580026 ignition[1359]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 7 01:09:04.579404 unknown[1359]: wrote ssh authorized keys file for user: core
Mar 7 01:09:04.591286 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:09:04.592362 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 7 01:09:04.679621 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 7 01:09:04.970206 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:09:04.970206 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 7 01:09:04.972198 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 7 01:09:05.224187 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 7 01:09:05.476734 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 7 01:09:05.476734 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 7 01:09:05.476734 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 7 01:09:05.476734 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:09:05.483125 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:09:05.483125 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:09:05.483125 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:09:05.483125 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:09:05.483125 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:09:05.483125 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:09:05.483125 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:09:05.483125 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 7 01:09:05.483125 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 7 01:09:05.483125 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 7 01:09:05.483125 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Mar 7 01:09:05.956777 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 7 01:09:07.760050 ignition[1359]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 7 01:09:07.760050 ignition[1359]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 7 01:09:07.763268 ignition[1359]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:09:07.763268 ignition[1359]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:09:07.763268 ignition[1359]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 7 01:09:07.763268 ignition[1359]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Mar 7 01:09:07.763268 ignition[1359]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Mar 7 01:09:07.763268 ignition[1359]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:09:07.763268 ignition[1359]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:09:07.763268 ignition[1359]: INFO : files: files passed
Mar 7 01:09:07.763268 ignition[1359]: INFO : Ignition finished successfully
Mar 7 01:09:07.764331 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 7 01:09:07.773195 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 7 01:09:07.778097 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 7 01:09:07.779505 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 7 01:09:07.779635 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 7 01:09:07.805367 initrd-setup-root-after-ignition[1388]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:09:07.805367 initrd-setup-root-after-ignition[1388]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:09:07.809355 initrd-setup-root-after-ignition[1392]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:09:07.811677 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:09:07.812524 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 7 01:09:07.819162 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 7 01:09:07.848232 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 7 01:09:07.848381 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 7 01:09:07.849856 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 7 01:09:07.851114 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 7 01:09:07.852021 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 7 01:09:07.858636 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 7 01:09:07.872688 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 01:09:07.878126 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 7 01:09:07.893451 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:09:07.894623 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:09:07.895431 systemd[1]: Stopped target timers.target - Timer Units.
Mar 7 01:09:07.896292 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 7 01:09:07.896477 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 01:09:07.897788 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 7 01:09:07.898676 systemd[1]: Stopped target basic.target - Basic System.
Mar 7 01:09:07.899484 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 7 01:09:07.900298 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 01:09:07.901113 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 7 01:09:07.902023 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 7 01:09:07.902815 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:09:07.903647 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 7 01:09:07.904858 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 7 01:09:07.905750 systemd[1]: Stopped target swap.target - Swaps.
Mar 7 01:09:07.906512 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 7 01:09:07.906694 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:09:07.907824 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:09:07.908673 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:09:07.909395 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 7 01:09:07.909700 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:09:07.910362 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 7 01:09:07.910581 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:09:07.911980 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 7 01:09:07.912167 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:09:07.912892 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 7 01:09:07.913070 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 7 01:09:07.920294 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 7 01:09:07.925252 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 7 01:09:07.926738 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 7 01:09:07.927577 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:09:07.933213 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 7 01:09:07.934334 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 01:09:07.940755 ignition[1413]: INFO : Ignition 2.19.0
Mar 7 01:09:07.940755 ignition[1413]: INFO : Stage: umount
Mar 7 01:09:07.949858 ignition[1413]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:09:07.949858 ignition[1413]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 01:09:07.949858 ignition[1413]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 01:09:07.949858 ignition[1413]: INFO : PUT result: OK
Mar 7 01:09:07.949858 ignition[1413]: INFO : umount: umount passed
Mar 7 01:09:07.949858 ignition[1413]: INFO : Ignition finished successfully
Mar 7 01:09:07.947451 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 7 01:09:07.947605 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 7 01:09:07.949475 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 7 01:09:07.949606 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 7 01:09:07.955425 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 7 01:09:07.955499 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 7 01:09:07.957770 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 7 01:09:07.957842 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 7 01:09:07.958408 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 7 01:09:07.958465 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 7 01:09:07.961018 systemd[1]: Stopped target network.target - Network.
Mar 7 01:09:07.961629 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 7 01:09:07.961698 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 01:09:07.962234 systemd[1]: Stopped target paths.target - Path Units.
Mar 7 01:09:07.962659 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 7 01:09:07.962731 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:09:07.964200 systemd[1]: Stopped target slices.target - Slice Units.
Mar 7 01:09:07.964649 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 7 01:09:07.966019 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 7 01:09:07.966077 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:09:07.966734 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 7 01:09:07.966783 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:09:07.967259 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 7 01:09:07.967341 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 7 01:09:07.967788 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 7 01:09:07.967845 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 7 01:09:07.968642 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 7 01:09:07.969322 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 7 01:09:07.971503 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 7 01:09:07.972991 systemd-networkd[1170]: eth0: DHCPv6 lease lost
Mar 7 01:09:07.975748 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 7 01:09:07.978437 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 7 01:09:07.980446 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 7 01:09:07.980837 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 7 01:09:07.984337 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 7 01:09:07.984490 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 7 01:09:07.986524 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 7 01:09:07.986598 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:09:07.987332 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 7 01:09:07.987403 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 7 01:09:07.994066 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 7 01:09:07.994641 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 7 01:09:07.994724 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 01:09:07.995545 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 7 01:09:07.995613 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:09:07.996231 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 7 01:09:07.996287 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:09:07.996994 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 7 01:09:07.997050 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:09:07.997958 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:09:08.014572 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 7 01:09:08.014791 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:09:08.016117 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 7 01:09:08.016256 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 7 01:09:08.017793 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 7 01:09:08.017878 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:09:08.018763 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 7 01:09:08.018809 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:09:08.019579 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 7 01:09:08.019643 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:09:08.020799 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 7 01:09:08.020858 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:09:08.022132 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:09:08.022194 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:09:08.029184 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 7 01:09:08.029919 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 7 01:09:08.030008 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:09:08.030789 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 7 01:09:08.030853 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:09:08.034045 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 7 01:09:08.034113 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:09:08.034747 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:09:08.034808 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:09:08.039592 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 7 01:09:08.039718 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 7 01:09:08.040514 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 7 01:09:08.048139 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 7 01:09:08.057050 systemd[1]: Switching root.
Mar 7 01:09:08.084103 systemd-journald[179]: Journal stopped
Mar 7 01:09:09.947048 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Mar 7 01:09:09.947166 kernel: SELinux: policy capability network_peer_controls=1
Mar 7 01:09:09.947190 kernel: SELinux: policy capability open_perms=1
Mar 7 01:09:09.947217 kernel: SELinux: policy capability extended_socket_class=1
Mar 7 01:09:09.947237 kernel: SELinux: policy capability always_check_network=0
Mar 7 01:09:09.947256 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 7 01:09:09.947276 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 7 01:09:09.947296 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 7 01:09:09.947487 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 7 01:09:09.947513 kernel: audit: type=1403 audit(1772845748.607:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 7 01:09:09.947535 systemd[1]: Successfully loaded SELinux policy in 67.222ms.
Mar 7 01:09:09.947573 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.382ms.
Mar 7 01:09:09.947596 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:09:09.947619 systemd[1]: Detected virtualization amazon.
Mar 7 01:09:09.947641 systemd[1]: Detected architecture x86-64.
Mar 7 01:09:09.947662 systemd[1]: Detected first boot.
Mar 7 01:09:09.947684 systemd[1]: Initializing machine ID from VM UUID.
Mar 7 01:09:09.947707 zram_generator::config[1456]: No configuration found.
Mar 7 01:09:09.947738 systemd[1]: Populated /etc with preset unit settings.
Mar 7 01:09:09.947759 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 7 01:09:09.947780 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 7 01:09:09.947801 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 7 01:09:09.947823 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 7 01:09:09.947845 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 7 01:09:09.947866 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 7 01:09:09.947887 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 7 01:09:09.950047 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 7 01:09:09.950076 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 7 01:09:09.950096 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 7 01:09:09.950117 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 7 01:09:09.950137 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:09:09.950157 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:09:09.950176 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 7 01:09:09.950197 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 7 01:09:09.950223 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 7 01:09:09.950249 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:09:09.950269 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 7 01:09:09.950290 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:09:09.950309 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 7 01:09:09.950328 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 7 01:09:09.950349 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 7 01:09:09.950370 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 7 01:09:09.950398 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:09:09.950509 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 01:09:09.950603 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:09:09.950623 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:09:09.950644 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 7 01:09:09.950667 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 7 01:09:09.950690 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:09:09.950713 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:09:09.950734 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:09:09.950761 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 7 01:09:09.950783 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 7 01:09:09.950805 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 7 01:09:09.950825 systemd[1]: Mounting media.mount - External Media Directory...
Mar 7 01:09:09.950845 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:09:09.950866 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 7 01:09:09.950886 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 7 01:09:09.951134 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 7 01:09:09.951166 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 7 01:09:09.951196 systemd[1]: Reached target machines.target - Containers.
Mar 7 01:09:09.951217 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 7 01:09:09.951240 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:09:09.951263 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:09:09.951286 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 7 01:09:09.951309 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:09:09.951332 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 01:09:09.951354 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:09:09.951378 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 7 01:09:09.951399 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:09:09.951418 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 7 01:09:09.951437 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 7 01:09:09.951456 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 7 01:09:09.951476 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 7 01:09:09.951499 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 7 01:09:09.951521 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:09:09.951544 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:09:09.951572 kernel: loop: module loaded
Mar 7 01:09:09.951596 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 7 01:09:09.951617 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 7 01:09:09.951637 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 01:09:09.951661 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 7 01:09:09.951684 systemd[1]: Stopped verity-setup.service.
Mar 7 01:09:09.951708 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:09:09.951733 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 7 01:09:09.951758 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 7 01:09:09.951788 systemd[1]: Mounted media.mount - External Media Directory.
Mar 7 01:09:09.951813 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 7 01:09:09.951837 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 7 01:09:09.951863 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 7 01:09:09.951891 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:09:09.951973 kernel: fuse: init (API version 7.39)
Mar 7 01:09:09.954018 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 7 01:09:09.954049 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 7 01:09:09.954069 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:09:09.954090 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:09:09.954111 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:09:09.954138 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:09:09.954159 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 7 01:09:09.954180 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 7 01:09:09.954201 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:09:09.954222 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:09:09.954244 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:09:09.954266 kernel: ACPI: bus type drm_connector registered
Mar 7 01:09:09.954285 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 01:09:09.954310 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 01:09:09.954371 systemd-journald[1538]: Collecting audit messages is disabled.
Mar 7 01:09:09.954414 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 7 01:09:09.954437 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 7 01:09:09.954459 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 7 01:09:09.954482 systemd-journald[1538]: Journal started
Mar 7 01:09:09.954525 systemd-journald[1538]: Runtime Journal (/run/log/journal/ec20ac8353a61bf08e8f3587e2d1b354) is 4.7M, max 38.2M, 33.4M free.
Mar 7 01:09:09.961763 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 7 01:09:09.460713 systemd[1]: Queued start job for default target multi-user.target.
Mar 7 01:09:09.516436 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Mar 7 01:09:09.516918 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 7 01:09:09.978098 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 7 01:09:09.978201 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 7 01:09:09.981989 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 01:09:09.987022 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 7 01:09:09.996016 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 7 01:09:10.006012 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 7 01:09:10.013100 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:09:10.028221 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 7 01:09:10.028314 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 01:09:10.041561 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 7 01:09:10.044923 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 01:09:10.056931 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:09:10.070479 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 7 01:09:10.081921 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:09:10.093033 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:09:10.097283 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 7 01:09:10.099403 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:09:10.100709 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 7 01:09:10.107987 kernel: loop0: detected capacity change from 0 to 217752
Mar 7 01:09:10.105307 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 7 01:09:10.106448 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 7 01:09:10.107518 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 7 01:09:10.145216 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 7 01:09:10.155534 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 7 01:09:10.167297 systemd-tmpfiles[1568]: ACLs are not supported, ignoring.
Mar 7 01:09:10.167324 systemd-tmpfiles[1568]: ACLs are not supported, ignoring.
Mar 7 01:09:10.167489 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 7 01:09:10.178028 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 7 01:09:10.187555 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:09:10.209637 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 7 01:09:10.215510 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:09:10.224093 systemd-journald[1538]: Time spent on flushing to /var/log/journal/ec20ac8353a61bf08e8f3587e2d1b354 is 63.725ms for 999 entries.
Mar 7 01:09:10.224093 systemd-journald[1538]: System Journal (/var/log/journal/ec20ac8353a61bf08e8f3587e2d1b354) is 8.0M, max 195.6M, 187.6M free.
Mar 7 01:09:10.295363 systemd-journald[1538]: Received client request to flush runtime journal.
Mar 7 01:09:10.228233 udevadm[1595]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 7 01:09:10.233074 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 7 01:09:10.238339 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 7 01:09:10.298343 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 7 01:09:10.317459 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 7 01:09:10.330618 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:09:10.359951 systemd-tmpfiles[1606]: ACLs are not supported, ignoring.
Mar 7 01:09:10.360334 systemd-tmpfiles[1606]: ACLs are not supported, ignoring.
Mar 7 01:09:10.367639 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:09:10.449938 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 7 01:09:10.488932 kernel: loop1: detected capacity change from 0 to 142488
Mar 7 01:09:10.646025 kernel: loop2: detected capacity change from 0 to 140768
Mar 7 01:09:10.770983 kernel: loop3: detected capacity change from 0 to 61336
Mar 7 01:09:10.811332 kernel: loop4: detected capacity change from 0 to 217752
Mar 7 01:09:10.847092 kernel: loop5: detected capacity change from 0 to 142488
Mar 7 01:09:10.887980 kernel: loop6: detected capacity change from 0 to 140768
Mar 7 01:09:10.914941 kernel: loop7: detected capacity change from 0 to 61336
Mar 7 01:09:10.945874 (sd-merge)[1615]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Mar 7 01:09:10.947943 (sd-merge)[1615]: Merged extensions into '/usr'.
Mar 7 01:09:10.952458 systemd[1]: Reloading requested from client PID 1567 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 7 01:09:10.952476 systemd[1]: Reloading...
Mar 7 01:09:11.070948 zram_generator::config[1641]: No configuration found.
Mar 7 01:09:11.232653 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:09:11.291259 systemd[1]: Reloading finished in 337 ms.
Mar 7 01:09:11.319013 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 7 01:09:11.319809 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 7 01:09:11.334234 systemd[1]: Starting ensure-sysext.service...
Mar 7 01:09:11.337138 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:09:11.343134 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:09:11.376388 systemd[1]: Reloading requested from client PID 1693 ('systemctl') (unit ensure-sysext.service)...
Mar 7 01:09:11.376405 systemd[1]: Reloading...
Mar 7 01:09:11.396116 systemd-udevd[1695]: Using default interface naming scheme 'v255'.
Mar 7 01:09:11.404472 systemd-tmpfiles[1694]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 7 01:09:11.405581 systemd-tmpfiles[1694]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 7 01:09:11.406756 systemd-tmpfiles[1694]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 7 01:09:11.407123 systemd-tmpfiles[1694]: ACLs are not supported, ignoring.
Mar 7 01:09:11.407232 systemd-tmpfiles[1694]: ACLs are not supported, ignoring.
Mar 7 01:09:11.412701 systemd-tmpfiles[1694]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 01:09:11.412717 systemd-tmpfiles[1694]: Skipping /boot
Mar 7 01:09:11.448381 systemd-tmpfiles[1694]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 01:09:11.448397 systemd-tmpfiles[1694]: Skipping /boot
Mar 7 01:09:11.492922 zram_generator::config[1723]: No configuration found.
Mar 7 01:09:11.624127 (udev-worker)[1737]: Network interface NamePolicy= disabled on kernel command line.
Mar 7 01:09:11.768928 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Mar 7 01:09:11.774927 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Mar 7 01:09:11.784920 ldconfig[1563]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 7 01:09:11.813927 kernel: ACPI: button: Power Button [PWRF]
Mar 7 01:09:11.823893 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:09:11.831927 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
Mar 7 01:09:11.832057 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4
Mar 7 01:09:11.835934 kernel: ACPI: button: Sleep Button [SLPF]
Mar 7 01:09:11.928922 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1746)
Mar 7 01:09:11.959034 kernel: mousedev: PS/2 mouse device common for all mice
Mar 7 01:09:11.979140 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 7 01:09:11.980819 systemd[1]: Reloading finished in 603 ms.
Mar 7 01:09:12.003933 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:09:12.005095 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 7 01:09:12.006402 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:09:12.068734 systemd[1]: Finished ensure-sysext.service.
Mar 7 01:09:12.092327 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:09:12.099182 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 7 01:09:12.105764 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 7 01:09:12.107008 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:09:12.109490 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:09:12.116106 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 01:09:12.119275 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:09:12.122282 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:09:12.123070 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:09:12.127559 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 7 01:09:12.138843 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 01:09:12.152204 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 01:09:12.153000 systemd[1]: Reached target time-set.target - System Time Set.
Mar 7 01:09:12.161473 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 7 01:09:12.165136 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:09:12.166488 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:09:12.210114 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 7 01:09:12.211783 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 7 01:09:12.242392 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:09:12.242614 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:09:12.248550 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:09:12.249671 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:09:12.253663 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 01:09:12.254118 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 01:09:12.255499 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 7 01:09:12.259476 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:09:12.260211 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:09:12.264927 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 01:09:12.265046 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 01:09:12.274123 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 7 01:09:12.302141 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 7 01:09:12.314185 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 7 01:09:12.315135 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 7 01:09:12.316705 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 7 01:09:12.322455 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 7 01:09:12.332181 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 7 01:09:12.332827 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 7 01:09:12.339492 augenrules[1926]: No rules
Mar 7 01:09:12.342418 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 7 01:09:12.360043 lvm[1927]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 7 01:09:12.372614 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 7 01:09:12.376216 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 7 01:09:12.395303 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 7 01:09:12.399159 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:09:12.408119 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 7 01:09:12.430110 lvm[1943]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 7 01:09:12.456973 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:09:12.481352 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 7 01:09:12.502635 systemd-networkd[1894]: lo: Link UP
Mar 7 01:09:12.502655 systemd-networkd[1894]: lo: Gained carrier
Mar 7 01:09:12.503555 systemd-resolved[1898]: Positive Trust Anchors:
Mar 7 01:09:12.503572 systemd-resolved[1898]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 01:09:12.503642 systemd-resolved[1898]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 01:09:12.504534 systemd-networkd[1894]: Enumeration completed
Mar 7 01:09:12.504687 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 01:09:12.506935 systemd-networkd[1894]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:09:12.506944 systemd-networkd[1894]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 01:09:12.512308 systemd-networkd[1894]: eth0: Link UP
Mar 7 01:09:12.512494 systemd-networkd[1894]: eth0: Gained carrier
Mar 7 01:09:12.512525 systemd-networkd[1894]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:09:12.514315 systemd-resolved[1898]: Defaulting to hostname 'linux'.
Mar 7 01:09:12.515209 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 7 01:09:12.519960 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 01:09:12.520721 systemd[1]: Reached target network.target - Network.
Mar 7 01:09:12.521637 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:09:12.522413 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 01:09:12.523176 systemd-networkd[1894]: eth0: DHCPv4 address 172.31.31.131/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 7 01:09:12.523448 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 7 01:09:12.525087 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 7 01:09:12.526219 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 7 01:09:12.526722 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 7 01:09:12.527171 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 7 01:09:12.527565 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 7 01:09:12.527622 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:09:12.528048 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:09:12.528924 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 7 01:09:12.530723 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 7 01:09:12.544209 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 7 01:09:12.545630 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 7 01:09:12.546268 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:09:12.546744 systemd[1]: Reached target basic.target - Basic System.
Mar 7 01:09:12.547258 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 7 01:09:12.547302 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 7 01:09:12.548480 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 7 01:09:12.553130 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 7 01:09:12.561378 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 7 01:09:12.565986 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 7 01:09:12.572865 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 7 01:09:12.573685 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 7 01:09:12.576574 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 7 01:09:12.580319 systemd[1]: Started ntpd.service - Network Time Service.
Mar 7 01:09:12.589987 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 7 01:09:12.593189 systemd[1]: Starting setup-oem.service - Setup OEM...
Mar 7 01:09:12.617147 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 7 01:09:12.622885 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 7 01:09:12.637203 jq[1954]: false
Mar 7 01:09:12.639041 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 7 01:09:12.640798 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 7 01:09:12.642314 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 7 01:09:12.644118 systemd[1]: Starting update-engine.service - Update Engine...
Mar 7 01:09:12.650072 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 7 01:09:12.655434 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 7 01:09:12.656007 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 7 01:09:12.662506 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 7 01:09:12.662969 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 7 01:09:12.765923 update_engine[1965]: I20260307 01:09:12.756678 1965 main.cc:92] Flatcar Update Engine starting
Mar 7 01:09:12.759678 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 7 01:09:12.759459 dbus-daemon[1953]: [system] SELinux support is enabled
Mar 7 01:09:12.769076 dbus-daemon[1953]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1894 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Mar 7 01:09:12.772786 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 7 01:09:12.774751 jq[1966]: true
Mar 7 01:09:12.772830 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 7 01:09:12.773742 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 7 01:09:12.773770 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 7 01:09:12.777589 update_engine[1965]: I20260307 01:09:12.777172 1965 update_check_scheduler.cc:74] Next update check in 7m50s
Mar 7 01:09:12.777703 dbus-daemon[1953]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 7 01:09:12.783278 systemd[1]: Started update-engine.service - Update Engine.
Mar 7 01:09:12.798689 ntpd[1957]: 7 Mar 01:09:12 ntpd[1957]: ntpd 4.2.8p17@1.4004-o Fri Mar 6 22:16:32 UTC 2026 (1): Starting
Mar 7 01:09:12.798689 ntpd[1957]: 7 Mar 01:09:12 ntpd[1957]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 7 01:09:12.798689 ntpd[1957]: 7 Mar 01:09:12 ntpd[1957]: ----------------------------------------------------
Mar 7 01:09:12.798689 ntpd[1957]: 7 Mar 01:09:12 ntpd[1957]: ntp-4 is maintained by Network Time Foundation,
Mar 7 01:09:12.798689 ntpd[1957]: 7 Mar 01:09:12 ntpd[1957]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 7 01:09:12.798689 ntpd[1957]: 7 Mar 01:09:12 ntpd[1957]: corporation. Support and training for ntp-4 are
Mar 7 01:09:12.798689 ntpd[1957]: 7 Mar 01:09:12 ntpd[1957]: available at https://www.nwtime.org/support
Mar 7 01:09:12.798689 ntpd[1957]: 7 Mar 01:09:12 ntpd[1957]: ----------------------------------------------------
Mar 7 01:09:12.796992 ntpd[1957]: ntpd 4.2.8p17@1.4004-o Fri Mar 6 22:16:32 UTC 2026 (1): Starting
Mar 7 01:09:12.797018 ntpd[1957]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 7 01:09:12.797029 ntpd[1957]: ----------------------------------------------------
Mar 7 01:09:12.797040 ntpd[1957]: ntp-4 is maintained by Network Time Foundation,
Mar 7 01:09:12.797050 ntpd[1957]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 7 01:09:12.797060 ntpd[1957]: corporation. Support and training for ntp-4 are
Mar 7 01:09:12.797069 ntpd[1957]: available at https://www.nwtime.org/support
Mar 7 01:09:12.797078 ntpd[1957]: ----------------------------------------------------
Mar 7 01:09:12.800950 (ntainerd)[1984]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 7 01:09:12.818646 ntpd[1957]: 7 Mar 01:09:12 ntpd[1957]: proto: precision = 0.064 usec (-24)
Mar 7 01:09:12.813545 ntpd[1957]: proto: precision = 0.064 usec (-24)
Mar 7 01:09:12.804592 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Mar 7 01:09:12.818259 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 7 01:09:12.821118 systemd[1]: Finished setup-oem.service - Setup OEM.
Mar 7 01:09:12.827691 tar[1968]: linux-amd64/LICENSE
Mar 7 01:09:12.827691 tar[1968]: linux-amd64/helm
Mar 7 01:09:12.828041 ntpd[1957]: 7 Mar 01:09:12 ntpd[1957]: basedate set to 2026-02-22
Mar 7 01:09:12.828041 ntpd[1957]: 7 Mar 01:09:12 ntpd[1957]: gps base set to 2026-02-22 (week 2407)
Mar 7 01:09:12.824185 ntpd[1957]: basedate set to 2026-02-22
Mar 7 01:09:12.824208 ntpd[1957]: gps base set to 2026-02-22 (week 2407)
Mar 7 01:09:12.835856 systemd[1]: motdgen.service: Deactivated successfully.
Mar 7 01:09:12.837024 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 7 01:09:12.838696 ntpd[1957]: Listen and drop on 0 v6wildcard [::]:123
Mar 7 01:09:12.840042 ntpd[1957]: 7 Mar 01:09:12 ntpd[1957]: Listen and drop on 0 v6wildcard [::]:123
Mar 7 01:09:12.840042 ntpd[1957]: 7 Mar 01:09:12 ntpd[1957]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 7 01:09:12.840042 ntpd[1957]: 7 Mar 01:09:12 ntpd[1957]: Listen normally on 2 lo 127.0.0.1:123
Mar 7 01:09:12.840042 ntpd[1957]: 7 Mar 01:09:12 ntpd[1957]: Listen normally on 3 eth0 172.31.31.131:123
Mar 7 01:09:12.840042 ntpd[1957]: 7 Mar 01:09:12 ntpd[1957]: Listen normally on 4 lo [::1]:123
Mar 7 01:09:12.840042 ntpd[1957]: 7 Mar 01:09:12 ntpd[1957]: bind(21) AF_INET6 fe80::465:43ff:fe4e:fd7d%2#123 flags 0x11 failed: Cannot assign requested address
Mar 7 01:09:12.840042 ntpd[1957]: 7 Mar 01:09:12 ntpd[1957]: unable to create socket on eth0 (5) for fe80::465:43ff:fe4e:fd7d%2#123
Mar 7 01:09:12.840042 ntpd[1957]: 7 Mar 01:09:12 ntpd[1957]: failed to init interface for address fe80::465:43ff:fe4e:fd7d%2
Mar 7 01:09:12.840042 ntpd[1957]: 7 Mar 01:09:12 ntpd[1957]: Listening on routing socket on fd #21 for interface updates
Mar 7 01:09:12.838767 ntpd[1957]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 7 01:09:12.838986 ntpd[1957]: Listen normally on 2 lo 127.0.0.1:123
Mar 7 01:09:12.839032 ntpd[1957]: Listen normally on 3 eth0 172.31.31.131:123
Mar 7 01:09:12.839082 ntpd[1957]: Listen normally on 4 lo [::1]:123
Mar 7 01:09:12.839134 ntpd[1957]: bind(21) AF_INET6 fe80::465:43ff:fe4e:fd7d%2#123 flags 0x11 failed: Cannot assign requested address
Mar 7 01:09:12.839156 ntpd[1957]: unable to create socket on eth0 (5) for fe80::465:43ff:fe4e:fd7d%2#123
Mar 7 01:09:12.839172 ntpd[1957]: failed to init interface for address fe80::465:43ff:fe4e:fd7d%2
Mar 7 01:09:12.839207 ntpd[1957]: Listening on routing socket on fd #21 for interface updates
Mar 7 01:09:12.844714 jq[1991]: true
Mar 7 01:09:12.857745 ntpd[1957]: 7 Mar 01:09:12 ntpd[1957]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 7 01:09:12.857745 ntpd[1957]: 7 Mar 01:09:12 ntpd[1957]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 7 01:09:12.849757 ntpd[1957]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 7 01:09:12.849803 ntpd[1957]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 7 01:09:12.857884 extend-filesystems[1955]: Found loop4
Mar 7 01:09:12.857884 extend-filesystems[1955]: Found loop5
Mar 7 01:09:12.857884 extend-filesystems[1955]: Found loop6
Mar 7 01:09:12.857884 extend-filesystems[1955]: Found loop7
Mar 7 01:09:12.857884 extend-filesystems[1955]: Found nvme0n1
Mar 7 01:09:12.857884 extend-filesystems[1955]: Found nvme0n1p1
Mar 7 01:09:12.857884 extend-filesystems[1955]: Found nvme0n1p2
Mar 7 01:09:12.889058 extend-filesystems[1955]: Found nvme0n1p3
Mar 7 01:09:12.889058 extend-filesystems[1955]: Found usr
Mar 7 01:09:12.889058 extend-filesystems[1955]: Found nvme0n1p4
Mar 7 01:09:12.889058 extend-filesystems[1955]: Found nvme0n1p6
Mar 7 01:09:12.889058 extend-filesystems[1955]: Found nvme0n1p7
Mar 7 01:09:12.889058 extend-filesystems[1955]: Found nvme0n1p9
Mar 7 01:09:12.889058 extend-filesystems[1955]: Checking size of /dev/nvme0n1p9
Mar 7 01:09:12.916067 coreos-metadata[1952]: Mar 07 01:09:12.913 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Mar 7 01:09:12.916067 coreos-metadata[1952]: Mar 07 01:09:12.915 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Mar 7 01:09:12.919373 coreos-metadata[1952]: Mar 07 01:09:12.917 INFO Fetch successful
Mar 7 01:09:12.919373 coreos-metadata[1952]: Mar 07 01:09:12.917 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Mar 7 01:09:12.932333 coreos-metadata[1952]: Mar 07 01:09:12.927 INFO Fetch successful
Mar 7 01:09:12.932333 coreos-metadata[1952]: Mar 07 01:09:12.928 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Mar 7 01:09:12.932333 coreos-metadata[1952]: Mar 07 01:09:12.929 INFO Fetch successful
Mar 7 01:09:12.932333 coreos-metadata[1952]: Mar 07 01:09:12.929 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Mar 7 01:09:12.932333 coreos-metadata[1952]: Mar 07 01:09:12.930 INFO Fetch successful
Mar 7 01:09:12.932333 coreos-metadata[1952]: Mar 07 01:09:12.930 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Mar 7 01:09:12.933075 coreos-metadata[1952]: Mar 07 01:09:12.933 INFO Fetch failed with 404: resource not found
Mar 7 01:09:12.933075 coreos-metadata[1952]: Mar 07 01:09:12.933 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Mar 7 01:09:12.936626 coreos-metadata[1952]: Mar 07 01:09:12.934 INFO Fetch successful
Mar 7 01:09:12.936626 coreos-metadata[1952]: Mar 07 01:09:12.934 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Mar 7 01:09:12.936626 coreos-metadata[1952]: Mar 07 01:09:12.934 INFO Fetch successful
Mar 7 01:09:12.936626 coreos-metadata[1952]: Mar 07 01:09:12.934 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Mar 7 01:09:12.936626 coreos-metadata[1952]: Mar 07 01:09:12.935 INFO Fetch successful
Mar 7 01:09:12.936626 coreos-metadata[1952]: Mar 07 01:09:12.935 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Mar 7 01:09:12.939980 coreos-metadata[1952]: Mar 07 01:09:12.937 INFO Fetch successful
Mar 7 01:09:12.939980 coreos-metadata[1952]: Mar 07 01:09:12.937 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Mar 7 01:09:12.939980 coreos-metadata[1952]: Mar 07 01:09:12.938 INFO Fetch successful
Mar 7 01:09:12.940851 extend-filesystems[1955]: Resized partition /dev/nvme0n1p9
Mar 7 01:09:12.945355 systemd-logind[1964]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 7 01:09:12.954141 extend-filesystems[2018]: resize2fs 1.47.1 (20-May-2024)
Mar 7 01:09:12.958837 systemd-logind[1964]: Watching system buttons on /dev/input/event3 (Sleep Button)
Mar 7 01:09:12.958874 systemd-logind[1964]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 7 01:09:12.967278 systemd-logind[1964]: New seat seat0.
Mar 7 01:09:12.976575 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 7 01:09:12.999456 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Mar 7 01:09:13.140310 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1737)
Mar 7 01:09:13.119208 dbus-daemon[1953]: [system] Successfully activated service 'org.freedesktop.hostname1'
Mar 7 01:09:13.072610 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 7 01:09:13.139592 dbus-daemon[1953]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1996 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Mar 7 01:09:13.073850 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 7 01:09:13.131847 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Mar 7 01:09:13.160003 systemd[1]: Starting polkit.service - Authorization Manager...
Mar 7 01:09:13.176921 bash[2023]: Updated "/home/core/.ssh/authorized_keys"
Mar 7 01:09:13.170973 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 7 01:09:13.187285 systemd[1]: Starting sshkeys.service...
Mar 7 01:09:13.216956 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067
Mar 7 01:09:13.228239 locksmithd[1997]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 7 01:09:13.240008 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 7 01:09:13.255497 polkitd[2050]: Started polkitd version 121
Mar 7 01:09:13.258567 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 7 01:09:13.270481 extend-filesystems[2018]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Mar 7 01:09:13.270481 extend-filesystems[2018]: old_desc_blocks = 1, new_desc_blocks = 2
Mar 7 01:09:13.270481 extend-filesystems[2018]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
Mar 7 01:09:13.264317 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 7 01:09:13.281678 extend-filesystems[1955]: Resized filesystem in /dev/nvme0n1p9
Mar 7 01:09:13.264553 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 7 01:09:13.295184 polkitd[2050]: Loading rules from directory /etc/polkit-1/rules.d
Mar 7 01:09:13.296747 polkitd[2050]: Loading rules from directory /usr/share/polkit-1/rules.d
Mar 7 01:09:13.301474 polkitd[2050]: Finished loading, compiling and executing 2 rules
Mar 7 01:09:13.305651 dbus-daemon[1953]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Mar 7 01:09:13.305977 systemd[1]: Started polkit.service - Authorization Manager.
Mar 7 01:09:13.307840 polkitd[2050]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Mar 7 01:09:13.378973 systemd-hostnamed[1996]: Hostname set to (transient)
Mar 7 01:09:13.379226 systemd-resolved[1898]: System hostname changed to 'ip-172-31-31-131'.
Mar 7 01:09:13.552146 coreos-metadata[2068]: Mar 07 01:09:13.551 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Mar 7 01:09:13.555416 coreos-metadata[2068]: Mar 07 01:09:13.552 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Mar 7 01:09:13.555416 coreos-metadata[2068]: Mar 07 01:09:13.553 INFO Fetch successful
Mar 7 01:09:13.555416 coreos-metadata[2068]: Mar 07 01:09:13.553 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Mar 7 01:09:13.555416 coreos-metadata[2068]: Mar 07 01:09:13.554 INFO Fetch successful
Mar 7 01:09:13.557089 unknown[2068]: wrote ssh authorized keys file for user: core
Mar 7 01:09:13.629927 update-ssh-keys[2138]: Updated "/home/core/.ssh/authorized_keys"
Mar 7 01:09:13.636973 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 7 01:09:13.652253 systemd[1]: Finished sshkeys.service.
Mar 7 01:09:13.712405 containerd[1984]: time="2026-03-07T01:09:13.711965024Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 7 01:09:13.754056 sshd_keygen[1994]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 7 01:09:13.792859 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 7 01:09:13.797485 ntpd[1957]: bind(24) AF_INET6 fe80::465:43ff:fe4e:fd7d%2#123 flags 0x11 failed: Cannot assign requested address
Mar 7 01:09:13.797529 ntpd[1957]: unable to create socket on eth0 (6) for fe80::465:43ff:fe4e:fd7d%2#123
Mar 7 01:09:13.799100 ntpd[1957]: 7 Mar 01:09:13 ntpd[1957]: bind(24) AF_INET6 fe80::465:43ff:fe4e:fd7d%2#123 flags 0x11 failed: Cannot assign requested address
Mar 7 01:09:13.799100 ntpd[1957]: 7 Mar 01:09:13 ntpd[1957]: unable to create socket on eth0 (6) for fe80::465:43ff:fe4e:fd7d%2#123
Mar 7 01:09:13.799100 ntpd[1957]: 7 Mar 01:09:13 ntpd[1957]: failed to init interface for address fe80::465:43ff:fe4e:fd7d%2
Mar 7 01:09:13.797546 ntpd[1957]: failed to init interface for address fe80::465:43ff:fe4e:fd7d%2
Mar 7 01:09:13.801302 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 7 01:09:13.809557 containerd[1984]: time="2026-03-07T01:09:13.809504890Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:09:13.813187 containerd[1984]: time="2026-03-07T01:09:13.813133659Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:09:13.813187 containerd[1984]: time="2026-03-07T01:09:13.813187375Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 7 01:09:13.813309 containerd[1984]: time="2026-03-07T01:09:13.813211026Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 7 01:09:13.813440 containerd[1984]: time="2026-03-07T01:09:13.813417820Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 7 01:09:13.813490 containerd[1984]: time="2026-03-07T01:09:13.813447926Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 7 01:09:13.813545 containerd[1984]: time="2026-03-07T01:09:13.813525904Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:09:13.813584 containerd[1984]: time="2026-03-07T01:09:13.813545208Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:09:13.813817 containerd[1984]: time="2026-03-07T01:09:13.813788399Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:09:13.813863 containerd[1984]: time="2026-03-07T01:09:13.813819520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 7 01:09:13.813863 containerd[1984]: time="2026-03-07T01:09:13.813841568Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:09:13.813863 containerd[1984]: time="2026-03-07T01:09:13.813857882Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 7 01:09:13.814043 containerd[1984]: time="2026-03-07T01:09:13.814011698Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:09:13.814288 containerd[1984]: time="2026-03-07T01:09:13.814252031Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 7 01:09:13.815421 containerd[1984]: time="2026-03-07T01:09:13.815329088Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 01:09:13.815503 containerd[1984]: time="2026-03-07T01:09:13.815422772Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 7 01:09:13.815580 containerd[1984]: time="2026-03-07T01:09:13.815560089Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 7 01:09:13.815651 containerd[1984]: time="2026-03-07T01:09:13.815634337Z" level=info msg="metadata content store policy set" policy=shared
Mar 7 01:09:13.823313 systemd[1]: issuegen.service: Deactivated successfully.
Mar 7 01:09:13.823596 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 7 01:09:13.824945 containerd[1984]: time="2026-03-07T01:09:13.824061502Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 7 01:09:13.824945 containerd[1984]: time="2026-03-07T01:09:13.824133899Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 7 01:09:13.824945 containerd[1984]: time="2026-03-07T01:09:13.824157501Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 7 01:09:13.824945 containerd[1984]: time="2026-03-07T01:09:13.824177673Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 7 01:09:13.824945 containerd[1984]: time="2026-03-07T01:09:13.824198506Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 7 01:09:13.824945 containerd[1984]: time="2026-03-07T01:09:13.824385282Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 7 01:09:13.824945 containerd[1984]: time="2026-03-07T01:09:13.824726619Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 7 01:09:13.824945 containerd[1984]: time="2026-03-07T01:09:13.824880152Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 7 01:09:13.824945 containerd[1984]: time="2026-03-07T01:09:13.824922081Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 7 01:09:13.825572 containerd[1984]: time="2026-03-07T01:09:13.825546727Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 7 01:09:13.825707 containerd[1984]: time="2026-03-07T01:09:13.825687522Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 7 01:09:13.825807 containerd[1984]: time="2026-03-07T01:09:13.825789153Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 7 01:09:13.825938 containerd[1984]: time="2026-03-07T01:09:13.825886701Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 7 01:09:13.826039 containerd[1984]: time="2026-03-07T01:09:13.826024396Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 7 01:09:13.826141 containerd[1984]: time="2026-03-07T01:09:13.826123057Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 7 01:09:13.826235 containerd[1984]: time="2026-03-07T01:09:13.826220285Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 7 01:09:13.826390 containerd[1984]: time="2026-03-07T01:09:13.826303028Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 7 01:09:13.826390 containerd[1984]: time="2026-03-07T01:09:13.826325142Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 7 01:09:13.826547 containerd[1984]: time="2026-03-07T01:09:13.826482664Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 7 01:09:13.826547 containerd[1984]: time="2026-03-07T01:09:13.826512801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 7 01:09:13.828035 containerd[1984]: time="2026-03-07T01:09:13.826532808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 7 01:09:13.828600 containerd[1984]: time="2026-03-07T01:09:13.828145582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 7 01:09:13.828600 containerd[1984]: time="2026-03-07T01:09:13.828173601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 7 01:09:13.828600 containerd[1984]: time="2026-03-07T01:09:13.828215300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 7 01:09:13.828600 containerd[1984]: time="2026-03-07T01:09:13.828236906Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 7 01:09:13.828600 containerd[1984]: time="2026-03-07T01:09:13.828258213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..."
type=io.containerd.grpc.v1 Mar 7 01:09:13.828600 containerd[1984]: time="2026-03-07T01:09:13.828301823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 7 01:09:13.828600 containerd[1984]: time="2026-03-07T01:09:13.828330408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 7 01:09:13.828600 containerd[1984]: time="2026-03-07T01:09:13.828368778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 7 01:09:13.828600 containerd[1984]: time="2026-03-07T01:09:13.828392727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 7 01:09:13.828600 containerd[1984]: time="2026-03-07T01:09:13.828414100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 7 01:09:13.828600 containerd[1984]: time="2026-03-07T01:09:13.828463523Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 7 01:09:13.828600 containerd[1984]: time="2026-03-07T01:09:13.828501989Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 7 01:09:13.828600 containerd[1984]: time="2026-03-07T01:09:13.828540334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 7 01:09:13.828600 containerd[1984]: time="2026-03-07T01:09:13.828557954Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 7 01:09:13.829406 containerd[1984]: time="2026-03-07T01:09:13.829257598Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 7 01:09:13.829406 containerd[1984]: time="2026-03-07T01:09:13.829296174Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 7 01:09:13.829406 containerd[1984]: time="2026-03-07T01:09:13.829331664Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 7 01:09:13.829406 containerd[1984]: time="2026-03-07T01:09:13.829351375Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 7 01:09:13.829406 containerd[1984]: time="2026-03-07T01:09:13.829365680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 7 01:09:13.829803 containerd[1984]: time="2026-03-07T01:09:13.829656560Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 7 01:09:13.829803 containerd[1984]: time="2026-03-07T01:09:13.829684219Z" level=info msg="NRI interface is disabled by configuration." Mar 7 01:09:13.829803 containerd[1984]: time="2026-03-07T01:09:13.829704303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 7 01:09:13.831152 containerd[1984]: time="2026-03-07T01:09:13.831040620Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 7 01:09:13.831632 containerd[1984]: time="2026-03-07T01:09:13.831430762Z" level=info msg="Connect containerd service" Mar 7 01:09:13.831632 containerd[1984]: time="2026-03-07T01:09:13.831512487Z" level=info msg="using legacy CRI server" Mar 7 01:09:13.831632 containerd[1984]: time="2026-03-07T01:09:13.831524890Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 7 01:09:13.832789 containerd[1984]: time="2026-03-07T01:09:13.831854747Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 7 01:09:13.834200 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 7 01:09:13.835014 containerd[1984]: time="2026-03-07T01:09:13.834296656Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 7 01:09:13.835515 containerd[1984]: time="2026-03-07T01:09:13.835225267Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 7 01:09:13.835515 containerd[1984]: time="2026-03-07T01:09:13.835303479Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Mar 7 01:09:13.835515 containerd[1984]: time="2026-03-07T01:09:13.835360769Z" level=info msg="Start subscribing containerd event" Mar 7 01:09:13.835515 containerd[1984]: time="2026-03-07T01:09:13.835421346Z" level=info msg="Start recovering state" Mar 7 01:09:13.836386 containerd[1984]: time="2026-03-07T01:09:13.835519719Z" level=info msg="Start event monitor" Mar 7 01:09:13.836386 containerd[1984]: time="2026-03-07T01:09:13.835548641Z" level=info msg="Start snapshots syncer" Mar 7 01:09:13.836386 containerd[1984]: time="2026-03-07T01:09:13.835563721Z" level=info msg="Start cni network conf syncer for default" Mar 7 01:09:13.836386 containerd[1984]: time="2026-03-07T01:09:13.835576605Z" level=info msg="Start streaming server" Mar 7 01:09:13.836386 containerd[1984]: time="2026-03-07T01:09:13.835664903Z" level=info msg="containerd successfully booted in 0.127915s" Mar 7 01:09:13.835729 systemd[1]: Started containerd.service - containerd container runtime. Mar 7 01:09:13.859617 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 7 01:09:13.869541 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 7 01:09:13.872771 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 7 01:09:13.874268 systemd[1]: Reached target getty.target - Login Prompts. Mar 7 01:09:14.082699 tar[1968]: linux-amd64/README.md Mar 7 01:09:14.094098 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 7 01:09:14.563159 systemd-networkd[1894]: eth0: Gained IPv6LL Mar 7 01:09:14.566320 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 7 01:09:14.569475 systemd[1]: Reached target network-online.target - Network is Online. Mar 7 01:09:14.578334 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Mar 7 01:09:14.586028 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 7 01:09:14.591297 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 7 01:09:14.638808 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 7 01:09:14.648652 amazon-ssm-agent[2177]: Initializing new seelog logger Mar 7 01:09:14.649111 amazon-ssm-agent[2177]: New Seelog Logger Creation Complete Mar 7 01:09:14.649181 amazon-ssm-agent[2177]: 2026/03/07 01:09:14 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 7 01:09:14.649181 amazon-ssm-agent[2177]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 7 01:09:14.649734 amazon-ssm-agent[2177]: 2026/03/07 01:09:14 processing appconfig overrides Mar 7 01:09:14.650172 amazon-ssm-agent[2177]: 2026/03/07 01:09:14 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 7 01:09:14.650172 amazon-ssm-agent[2177]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 7 01:09:14.650266 amazon-ssm-agent[2177]: 2026/03/07 01:09:14 processing appconfig overrides Mar 7 01:09:14.650562 amazon-ssm-agent[2177]: 2026/03/07 01:09:14 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 7 01:09:14.650562 amazon-ssm-agent[2177]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 7 01:09:14.650645 amazon-ssm-agent[2177]: 2026/03/07 01:09:14 processing appconfig overrides Mar 7 01:09:14.651138 amazon-ssm-agent[2177]: 2026-03-07 01:09:14 INFO Proxy environment variables: Mar 7 01:09:14.655165 amazon-ssm-agent[2177]: 2026/03/07 01:09:14 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 7 01:09:14.655165 amazon-ssm-agent[2177]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 7 01:09:14.655331 amazon-ssm-agent[2177]: 2026/03/07 01:09:14 processing appconfig overrides Mar 7 01:09:14.664730 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Mar 7 01:09:14.678123 systemd[1]: Started sshd@0-172.31.31.131:22-68.220.241.50:49726.service - OpenSSH per-connection server daemon (68.220.241.50:49726). Mar 7 01:09:14.752911 amazon-ssm-agent[2177]: 2026-03-07 01:09:14 INFO https_proxy: Mar 7 01:09:14.849844 amazon-ssm-agent[2177]: 2026-03-07 01:09:14 INFO http_proxy: Mar 7 01:09:14.947836 amazon-ssm-agent[2177]: 2026-03-07 01:09:14 INFO no_proxy: Mar 7 01:09:15.017539 amazon-ssm-agent[2177]: 2026-03-07 01:09:14 INFO Checking if agent identity type OnPrem can be assumed Mar 7 01:09:15.017539 amazon-ssm-agent[2177]: 2026-03-07 01:09:14 INFO Checking if agent identity type EC2 can be assumed Mar 7 01:09:15.017539 amazon-ssm-agent[2177]: 2026-03-07 01:09:14 INFO Agent will take identity from EC2 Mar 7 01:09:15.017831 amazon-ssm-agent[2177]: 2026-03-07 01:09:14 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 7 01:09:15.017831 amazon-ssm-agent[2177]: 2026-03-07 01:09:14 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 7 01:09:15.017831 amazon-ssm-agent[2177]: 2026-03-07 01:09:14 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 7 01:09:15.017831 amazon-ssm-agent[2177]: 2026-03-07 01:09:14 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Mar 7 01:09:15.017831 amazon-ssm-agent[2177]: 2026-03-07 01:09:14 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Mar 7 01:09:15.017831 amazon-ssm-agent[2177]: 2026-03-07 01:09:14 INFO [amazon-ssm-agent] Starting Core Agent Mar 7 01:09:15.017831 amazon-ssm-agent[2177]: 2026-03-07 01:09:14 INFO [amazon-ssm-agent] registrar detected. Attempting registration Mar 7 01:09:15.017831 amazon-ssm-agent[2177]: 2026-03-07 01:09:14 INFO [Registrar] Starting registrar module Mar 7 01:09:15.017831 amazon-ssm-agent[2177]: 2026-03-07 01:09:14 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Mar 7 01:09:15.017831 amazon-ssm-agent[2177]: 2026-03-07 01:09:14 INFO [EC2Identity] EC2 registration was successful. 
Mar 7 01:09:15.017831 amazon-ssm-agent[2177]: 2026-03-07 01:09:14 INFO [CredentialRefresher] credentialRefresher has started Mar 7 01:09:15.017831 amazon-ssm-agent[2177]: 2026-03-07 01:09:14 INFO [CredentialRefresher] Starting credentials refresher loop Mar 7 01:09:15.017831 amazon-ssm-agent[2177]: 2026-03-07 01:09:15 INFO EC2RoleProvider Successfully connected with instance profile role credentials Mar 7 01:09:15.046079 amazon-ssm-agent[2177]: 2026-03-07 01:09:15 INFO [CredentialRefresher] Next credential rotation will be in 32.15832280603333 minutes Mar 7 01:09:15.192132 sshd[2195]: Accepted publickey for core from 68.220.241.50 port 49726 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:09:15.195190 sshd[2195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:09:15.206187 systemd-logind[1964]: New session 1 of user core. Mar 7 01:09:15.207835 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 7 01:09:15.214312 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 7 01:09:15.230069 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 7 01:09:15.238495 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 7 01:09:15.242616 (systemd)[2200]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 7 01:09:15.369714 systemd[2200]: Queued start job for default target default.target. Mar 7 01:09:15.375713 systemd[2200]: Created slice app.slice - User Application Slice. Mar 7 01:09:15.375760 systemd[2200]: Reached target paths.target - Paths. Mar 7 01:09:15.375782 systemd[2200]: Reached target timers.target - Timers. Mar 7 01:09:15.378089 systemd[2200]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 7 01:09:15.392366 systemd[2200]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
Mar 7 01:09:15.392515 systemd[2200]: Reached target sockets.target - Sockets. Mar 7 01:09:15.392535 systemd[2200]: Reached target basic.target - Basic System. Mar 7 01:09:15.392588 systemd[2200]: Reached target default.target - Main User Target. Mar 7 01:09:15.392626 systemd[2200]: Startup finished in 142ms. Mar 7 01:09:15.392866 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 7 01:09:15.400133 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 7 01:09:15.764268 systemd[1]: Started sshd@1-172.31.31.131:22-68.220.241.50:49734.service - OpenSSH per-connection server daemon (68.220.241.50:49734). Mar 7 01:09:16.034980 amazon-ssm-agent[2177]: 2026-03-07 01:09:16 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Mar 7 01:09:16.135968 amazon-ssm-agent[2177]: 2026-03-07 01:09:16 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2214) started Mar 7 01:09:16.235820 amazon-ssm-agent[2177]: 2026-03-07 01:09:16 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Mar 7 01:09:16.253918 sshd[2211]: Accepted publickey for core from 68.220.241.50 port 49734 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:09:16.254637 sshd[2211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:09:16.259572 systemd-logind[1964]: New session 2 of user core. Mar 7 01:09:16.270204 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 7 01:09:16.608422 sshd[2211]: pam_unix(sshd:session): session closed for user core Mar 7 01:09:16.613753 systemd[1]: sshd@1-172.31.31.131:22-68.220.241.50:49734.service: Deactivated successfully. Mar 7 01:09:16.616327 systemd[1]: session-2.scope: Deactivated successfully. Mar 7 01:09:16.617498 systemd-logind[1964]: Session 2 logged out. Waiting for processes to exit. 
Mar 7 01:09:16.618949 systemd-logind[1964]: Removed session 2. Mar 7 01:09:16.703312 systemd[1]: Started sshd@2-172.31.31.131:22-68.220.241.50:49738.service - OpenSSH per-connection server daemon (68.220.241.50:49738). Mar 7 01:09:16.797642 ntpd[1957]: Listen normally on 7 eth0 [fe80::465:43ff:fe4e:fd7d%2]:123 Mar 7 01:09:16.798116 ntpd[1957]: 7 Mar 01:09:16 ntpd[1957]: Listen normally on 7 eth0 [fe80::465:43ff:fe4e:fd7d%2]:123 Mar 7 01:09:17.193643 sshd[2229]: Accepted publickey for core from 68.220.241.50 port 49738 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:09:17.195737 sshd[2229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:09:17.203760 systemd-logind[1964]: New session 3 of user core. Mar 7 01:09:17.208153 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 7 01:09:17.549507 sshd[2229]: pam_unix(sshd:session): session closed for user core Mar 7 01:09:17.553507 systemd[1]: sshd@2-172.31.31.131:22-68.220.241.50:49738.service: Deactivated successfully. Mar 7 01:09:17.555561 systemd[1]: session-3.scope: Deactivated successfully. Mar 7 01:09:17.556980 systemd-logind[1964]: Session 3 logged out. Waiting for processes to exit. Mar 7 01:09:17.558354 systemd-logind[1964]: Removed session 3. Mar 7 01:09:20.985790 systemd-resolved[1898]: Clock change detected. Flushing caches. Mar 7 01:09:21.478103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:09:21.479153 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 7 01:09:21.479891 systemd[1]: Startup finished in 672ms (kernel) + 9.876s (initrd) + 11.749s (userspace) = 22.299s. 
Mar 7 01:09:21.488023 (kubelet)[2244]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:09:22.823349 kubelet[2244]: E0307 01:09:22.823284 2244 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:09:22.826107 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:09:22.826306 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:09:22.826701 systemd[1]: kubelet.service: Consumed 1.013s CPU time. Mar 7 01:09:28.825322 systemd[1]: Started sshd@3-172.31.31.131:22-68.220.241.50:41056.service - OpenSSH per-connection server daemon (68.220.241.50:41056). Mar 7 01:09:29.327229 sshd[2252]: Accepted publickey for core from 68.220.241.50 port 41056 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:09:29.329074 sshd[2252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:09:29.334682 systemd-logind[1964]: New session 4 of user core. Mar 7 01:09:29.339851 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 7 01:09:29.683628 sshd[2252]: pam_unix(sshd:session): session closed for user core Mar 7 01:09:29.687157 systemd[1]: sshd@3-172.31.31.131:22-68.220.241.50:41056.service: Deactivated successfully. Mar 7 01:09:29.689180 systemd[1]: session-4.scope: Deactivated successfully. Mar 7 01:09:29.691038 systemd-logind[1964]: Session 4 logged out. Waiting for processes to exit. Mar 7 01:09:29.692290 systemd-logind[1964]: Removed session 4. Mar 7 01:09:29.769241 systemd[1]: Started sshd@4-172.31.31.131:22-68.220.241.50:41072.service - OpenSSH per-connection server daemon (68.220.241.50:41072). 
Mar 7 01:09:30.253232 sshd[2259]: Accepted publickey for core from 68.220.241.50 port 41072 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:09:30.255180 sshd[2259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:09:30.260159 systemd-logind[1964]: New session 5 of user core. Mar 7 01:09:30.266819 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 7 01:09:30.602634 sshd[2259]: pam_unix(sshd:session): session closed for user core Mar 7 01:09:30.607339 systemd-logind[1964]: Session 5 logged out. Waiting for processes to exit. Mar 7 01:09:30.608671 systemd[1]: sshd@4-172.31.31.131:22-68.220.241.50:41072.service: Deactivated successfully. Mar 7 01:09:30.610713 systemd[1]: session-5.scope: Deactivated successfully. Mar 7 01:09:30.611817 systemd-logind[1964]: Removed session 5. Mar 7 01:09:30.693969 systemd[1]: Started sshd@5-172.31.31.131:22-68.220.241.50:41074.service - OpenSSH per-connection server daemon (68.220.241.50:41074). Mar 7 01:09:31.183830 sshd[2266]: Accepted publickey for core from 68.220.241.50 port 41074 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:09:31.184472 sshd[2266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:09:31.190210 systemd-logind[1964]: New session 6 of user core. Mar 7 01:09:31.195779 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 7 01:09:31.539034 sshd[2266]: pam_unix(sshd:session): session closed for user core Mar 7 01:09:31.543368 systemd-logind[1964]: Session 6 logged out. Waiting for processes to exit. Mar 7 01:09:31.544412 systemd[1]: sshd@5-172.31.31.131:22-68.220.241.50:41074.service: Deactivated successfully. Mar 7 01:09:31.546450 systemd[1]: session-6.scope: Deactivated successfully. Mar 7 01:09:31.547503 systemd-logind[1964]: Removed session 6. 
Mar 7 01:09:31.629015 systemd[1]: Started sshd@6-172.31.31.131:22-68.220.241.50:41090.service - OpenSSH per-connection server daemon (68.220.241.50:41090). Mar 7 01:09:32.114608 sshd[2273]: Accepted publickey for core from 68.220.241.50 port 41090 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:09:32.116049 sshd[2273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:09:32.122712 systemd-logind[1964]: New session 7 of user core. Mar 7 01:09:32.131843 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 7 01:09:32.421156 sudo[2276]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 7 01:09:32.421591 sudo[2276]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:09:32.438512 sudo[2276]: pam_unix(sudo:session): session closed for user root Mar 7 01:09:32.515994 sshd[2273]: pam_unix(sshd:session): session closed for user core Mar 7 01:09:32.521201 systemd[1]: sshd@6-172.31.31.131:22-68.220.241.50:41090.service: Deactivated successfully. Mar 7 01:09:32.521622 systemd-logind[1964]: Session 7 logged out. Waiting for processes to exit. Mar 7 01:09:32.523676 systemd[1]: session-7.scope: Deactivated successfully. Mar 7 01:09:32.525004 systemd-logind[1964]: Removed session 7. Mar 7 01:09:32.605005 systemd[1]: Started sshd@7-172.31.31.131:22-68.220.241.50:51464.service - OpenSSH per-connection server daemon (68.220.241.50:51464). Mar 7 01:09:32.998812 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 7 01:09:33.005875 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 7 01:09:33.086585 sshd[2281]: Accepted publickey for core from 68.220.241.50 port 51464 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:09:33.087843 sshd[2281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:09:33.094036 systemd-logind[1964]: New session 8 of user core. Mar 7 01:09:33.103816 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 7 01:09:33.361373 sudo[2288]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 7 01:09:33.362269 sudo[2288]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:09:33.367352 sudo[2288]: pam_unix(sudo:session): session closed for user root Mar 7 01:09:33.373800 sudo[2287]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 7 01:09:33.374221 sudo[2287]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:09:33.391007 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 7 01:09:33.393462 auditctl[2291]: No rules Mar 7 01:09:33.393925 systemd[1]: audit-rules.service: Deactivated successfully. Mar 7 01:09:33.394170 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 7 01:09:33.398093 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 7 01:09:33.441266 augenrules[2309]: No rules Mar 7 01:09:33.442771 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 7 01:09:33.444278 sudo[2287]: pam_unix(sudo:session): session closed for user root Mar 7 01:09:33.523415 sshd[2281]: pam_unix(sshd:session): session closed for user core Mar 7 01:09:33.528596 systemd[1]: sshd@7-172.31.31.131:22-68.220.241.50:51464.service: Deactivated successfully. Mar 7 01:09:33.531950 systemd[1]: session-8.scope: Deactivated successfully. 
Mar 7 01:09:33.533252 systemd-logind[1964]: Session 8 logged out. Waiting for processes to exit. Mar 7 01:09:33.535114 systemd-logind[1964]: Removed session 8. Mar 7 01:09:33.614934 systemd[1]: Started sshd@8-172.31.31.131:22-68.220.241.50:51476.service - OpenSSH per-connection server daemon (68.220.241.50:51476). Mar 7 01:09:34.103223 sshd[2317]: Accepted publickey for core from 68.220.241.50 port 51476 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:09:34.105152 sshd[2317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:09:34.110610 systemd-logind[1964]: New session 9 of user core. Mar 7 01:09:34.115825 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 7 01:09:34.380355 sudo[2320]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 7 01:09:34.380778 sudo[2320]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:09:35.950349 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:09:35.962100 (kubelet)[2334]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:09:36.247063 kubelet[2334]: E0307 01:09:36.246923 2334 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:09:36.250686 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:09:36.250882 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:09:36.574105 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Mar 7 01:09:36.574226 (dockerd)[2349]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 7 01:09:37.227392 dockerd[2349]: time="2026-03-07T01:09:37.227324616Z" level=info msg="Starting up" Mar 7 01:09:37.470879 dockerd[2349]: time="2026-03-07T01:09:37.470806231Z" level=info msg="Loading containers: start." Mar 7 01:09:37.629769 kernel: Initializing XFRM netlink socket Mar 7 01:09:37.678176 (udev-worker)[2371]: Network interface NamePolicy= disabled on kernel command line. Mar 7 01:09:37.742890 systemd-networkd[1894]: docker0: Link UP Mar 7 01:09:37.758912 dockerd[2349]: time="2026-03-07T01:09:37.758856874Z" level=info msg="Loading containers: done." Mar 7 01:09:37.786927 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3605853931-merged.mount: Deactivated successfully. Mar 7 01:09:37.791047 dockerd[2349]: time="2026-03-07T01:09:37.790976276Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 7 01:09:37.791238 dockerd[2349]: time="2026-03-07T01:09:37.791149829Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 7 01:09:37.791343 dockerd[2349]: time="2026-03-07T01:09:37.791322792Z" level=info msg="Daemon has completed initialization" Mar 7 01:09:37.827133 dockerd[2349]: time="2026-03-07T01:09:37.826597377Z" level=info msg="API listen on /run/docker.sock" Mar 7 01:09:37.826706 systemd[1]: Started docker.service - Docker Application Container Engine. 
Mar 7 01:09:39.312317 containerd[1984]: time="2026-03-07T01:09:39.312275228Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\""
Mar 7 01:09:39.835282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount974541439.mount: Deactivated successfully.
Mar 7 01:09:41.276503 containerd[1984]: time="2026-03-07T01:09:41.276443016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:09:41.278001 containerd[1984]: time="2026-03-07T01:09:41.277949704Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=27696467"
Mar 7 01:09:41.279879 containerd[1984]: time="2026-03-07T01:09:41.279350903Z" level=info msg="ImageCreate event name:\"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:09:41.282456 containerd[1984]: time="2026-03-07T01:09:41.282416431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:09:41.283795 containerd[1984]: time="2026-03-07T01:09:41.283756606Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"27693066\" in 1.971433597s"
Mar 7 01:09:41.283876 containerd[1984]: time="2026-03-07T01:09:41.283804690Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\""
Mar 7 01:09:41.284563 containerd[1984]: time="2026-03-07T01:09:41.284521204Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\""
Mar 7 01:09:42.879857 containerd[1984]: time="2026-03-07T01:09:42.879799214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:09:42.881584 containerd[1984]: time="2026-03-07T01:09:42.881413774Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=21450700"
Mar 7 01:09:42.883502 containerd[1984]: time="2026-03-07T01:09:42.882958624Z" level=info msg="ImageCreate event name:\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:09:42.886785 containerd[1984]: time="2026-03-07T01:09:42.886729267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:09:42.888192 containerd[1984]: time="2026-03-07T01:09:42.888145222Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"23142311\" in 1.603477797s"
Mar 7 01:09:42.888292 containerd[1984]: time="2026-03-07T01:09:42.888197481Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\""
Mar 7 01:09:42.889124 containerd[1984]: time="2026-03-07T01:09:42.889088929Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\""
Mar 7 01:09:43.999449 containerd[1984]: time="2026-03-07T01:09:43.999390699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:09:44.000755 containerd[1984]: time="2026-03-07T01:09:44.000678739Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=15548429"
Mar 7 01:09:44.006353 containerd[1984]: time="2026-03-07T01:09:44.006293721Z" level=info msg="ImageCreate event name:\"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:09:44.015590 containerd[1984]: time="2026-03-07T01:09:44.013457142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:09:44.017942 containerd[1984]: time="2026-03-07T01:09:44.017887697Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"17240058\" in 1.128753434s"
Mar 7 01:09:44.017942 containerd[1984]: time="2026-03-07T01:09:44.017946101Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\""
Mar 7 01:09:44.018708 containerd[1984]: time="2026-03-07T01:09:44.018537279Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\""
Mar 7 01:09:44.595605 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar 7 01:09:45.228148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2162495098.mount: Deactivated successfully.
Mar 7 01:09:45.711734 containerd[1984]: time="2026-03-07T01:09:45.711667709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:09:45.713384 containerd[1984]: time="2026-03-07T01:09:45.713313215Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=25685312"
Mar 7 01:09:45.714917 containerd[1984]: time="2026-03-07T01:09:45.714854697Z" level=info msg="ImageCreate event name:\"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:09:45.717650 containerd[1984]: time="2026-03-07T01:09:45.717584149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:09:45.719369 containerd[1984]: time="2026-03-07T01:09:45.718366580Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"25684331\" in 1.699762648s"
Mar 7 01:09:45.719369 containerd[1984]: time="2026-03-07T01:09:45.718410989Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\""
Mar 7 01:09:45.719889 containerd[1984]: time="2026-03-07T01:09:45.719865388Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Mar 7 01:09:46.261682 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2397787697.mount: Deactivated successfully.
Mar 7 01:09:46.263208 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 7 01:09:46.269999 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:09:46.554785 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:09:46.568083 (kubelet)[2583]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:09:46.652600 kubelet[2583]: E0307 01:09:46.651436 2583 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:09:46.654272 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:09:46.654506 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:09:47.668303 containerd[1984]: time="2026-03-07T01:09:47.668243364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:09:47.670424 containerd[1984]: time="2026-03-07T01:09:47.670146447Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556542"
Mar 7 01:09:47.672584 containerd[1984]: time="2026-03-07T01:09:47.672482757Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:09:47.677561 containerd[1984]: time="2026-03-07T01:09:47.677045067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:09:47.678423 containerd[1984]: time="2026-03-07T01:09:47.678381049Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 1.958373046s"
Mar 7 01:09:47.678521 containerd[1984]: time="2026-03-07T01:09:47.678429782Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\""
Mar 7 01:09:47.679159 containerd[1984]: time="2026-03-07T01:09:47.679130541Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 7 01:09:48.185141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2640998579.mount: Deactivated successfully.
Mar 7 01:09:48.198269 containerd[1984]: time="2026-03-07T01:09:48.198205095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:09:48.200375 containerd[1984]: time="2026-03-07T01:09:48.200155789Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Mar 7 01:09:48.202523 containerd[1984]: time="2026-03-07T01:09:48.202449834Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:09:48.206139 containerd[1984]: time="2026-03-07T01:09:48.206075423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:09:48.207118 containerd[1984]: time="2026-03-07T01:09:48.207079223Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 527.90807ms"
Mar 7 01:09:48.207243 containerd[1984]: time="2026-03-07T01:09:48.207123584Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Mar 7 01:09:48.208221 containerd[1984]: time="2026-03-07T01:09:48.208196551Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Mar 7 01:09:48.793912 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount113520359.mount: Deactivated successfully.
Mar 7 01:09:50.028255 containerd[1984]: time="2026-03-07T01:09:50.028188453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:09:50.030261 containerd[1984]: time="2026-03-07T01:09:50.030176921Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23630322"
Mar 7 01:09:50.034209 containerd[1984]: time="2026-03-07T01:09:50.034131159Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:09:50.039875 containerd[1984]: time="2026-03-07T01:09:50.039805243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:09:50.041406 containerd[1984]: time="2026-03-07T01:09:50.041354193Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 1.833125318s"
Mar 7 01:09:50.041406 containerd[1984]: time="2026-03-07T01:09:50.041402557Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\""
Mar 7 01:09:51.673718 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:09:51.679921 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:09:51.725369 systemd[1]: Reloading requested from client PID 2725 ('systemctl') (unit session-9.scope)...
Mar 7 01:09:51.725394 systemd[1]: Reloading...
Mar 7 01:09:51.834585 zram_generator::config[2765]: No configuration found.
Mar 7 01:09:52.007728 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:09:52.093325 systemd[1]: Reloading finished in 367 ms.
Mar 7 01:09:52.153710 systemd[1]: kubelet.service: Deactivated successfully.
Mar 7 01:09:52.153993 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:09:52.157472 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:09:52.371389 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:09:52.384161 (kubelet)[2829]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 7 01:09:52.440590 kubelet[2829]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 7 01:09:52.777910 kubelet[2829]: I0307 01:09:52.777846 2829 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Mar 7 01:09:52.777910 kubelet[2829]: I0307 01:09:52.777902 2829 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 7 01:09:52.777910 kubelet[2829]: I0307 01:09:52.777924 2829 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 7 01:09:52.778137 kubelet[2829]: I0307 01:09:52.777931 2829 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 7 01:09:52.778317 kubelet[2829]: I0307 01:09:52.778293 2829 server.go:951] "Client rotation is on, will bootstrap in background"
Mar 7 01:09:52.787912 kubelet[2829]: I0307 01:09:52.787725 2829 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 7 01:09:52.791586 kubelet[2829]: E0307 01:09:52.791179 2829 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.31.131:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.31.131:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 7 01:09:52.803794 kubelet[2829]: E0307 01:09:52.803254 2829 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 7 01:09:52.803794 kubelet[2829]: I0307 01:09:52.803592 2829 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 7 01:09:52.811974 kubelet[2829]: I0307 01:09:52.811930 2829 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 7 01:09:52.816228 kubelet[2829]: I0307 01:09:52.816165 2829 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 7 01:09:52.816461 kubelet[2829]: I0307 01:09:52.816224 2829 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-131","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 7 01:09:52.816621 kubelet[2829]: I0307 01:09:52.816466 2829 topology_manager.go:143] "Creating topology manager with none policy"
Mar 7 01:09:52.816621 kubelet[2829]: I0307 01:09:52.816481 2829 container_manager_linux.go:308] "Creating device plugin manager"
Mar 7 01:09:52.816700 kubelet[2829]: I0307 01:09:52.816676 2829 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 7 01:09:52.818883 kubelet[2829]: I0307 01:09:52.818851 2829 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Mar 7 01:09:52.819121 kubelet[2829]: I0307 01:09:52.819095 2829 kubelet.go:482] "Attempting to sync node with API server"
Mar 7 01:09:52.819192 kubelet[2829]: I0307 01:09:52.819128 2829 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 7 01:09:52.819192 kubelet[2829]: I0307 01:09:52.819162 2829 kubelet.go:394] "Adding apiserver pod source"
Mar 7 01:09:52.819192 kubelet[2829]: I0307 01:09:52.819177 2829 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 7 01:09:52.823733 kubelet[2829]: I0307 01:09:52.823707 2829 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 7 01:09:52.827079 kubelet[2829]: I0307 01:09:52.826900 2829 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 7 01:09:52.827079 kubelet[2829]: I0307 01:09:52.826951 2829 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 7 01:09:52.829411 kubelet[2829]: W0307 01:09:52.828600 2829 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 7 01:09:52.832681 kubelet[2829]: I0307 01:09:52.832649 2829 server.go:1257] "Started kubelet"
Mar 7 01:09:52.838333 kubelet[2829]: I0307 01:09:52.838284 2829 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Mar 7 01:09:52.840210 kubelet[2829]: I0307 01:09:52.839366 2829 server.go:317] "Adding debug handlers to kubelet server"
Mar 7 01:09:52.840210 kubelet[2829]: I0307 01:09:52.839661 2829 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 7 01:09:52.840210 kubelet[2829]: I0307 01:09:52.839729 2829 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 7 01:09:52.840210 kubelet[2829]: I0307 01:09:52.840077 2829 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 7 01:09:52.843982 kubelet[2829]: E0307 01:09:52.841760 2829 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.31.131:6443/api/v1/namespaces/default/events\": dial tcp 172.31.31.131:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-31-131.189a69db0b559275 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-131,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-131,},FirstTimestamp:2026-03-07 01:09:52.832623221 +0000 UTC m=+0.443528213,LastTimestamp:2026-03-07 01:09:52.832623221 +0000 UTC m=+0.443528213,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-131,}"
Mar 7 01:09:52.845240 kubelet[2829]: I0307 01:09:52.845205 2829 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Mar 7 01:09:52.851631 kubelet[2829]: E0307 01:09:52.851527 2829 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 7 01:09:52.851845 kubelet[2829]: I0307 01:09:52.851801 2829 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 7 01:09:52.855780 kubelet[2829]: E0307 01:09:52.855620 2829 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-31-131\" not found"
Mar 7 01:09:52.855780 kubelet[2829]: I0307 01:09:52.855669 2829 volume_manager.go:311] "Starting Kubelet Volume Manager"
Mar 7 01:09:52.856283 kubelet[2829]: E0307 01:09:52.856257 2829 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-131?timeout=10s\": dial tcp 172.31.31.131:6443: connect: connection refused" interval="200ms"
Mar 7 01:09:52.856502 kubelet[2829]: I0307 01:09:52.856477 2829 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 7 01:09:52.856830 kubelet[2829]: I0307 01:09:52.856535 2829 reconciler.go:29] "Reconciler: start to sync state"
Mar 7 01:09:52.857382 kubelet[2829]: I0307 01:09:52.857349 2829 factory.go:223] Registration of the systemd container factory successfully
Mar 7 01:09:52.857520 kubelet[2829]: I0307 01:09:52.857455 2829 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 7 01:09:52.860097 kubelet[2829]: I0307 01:09:52.860052 2829 factory.go:223] Registration of the containerd container factory successfully
Mar 7 01:09:52.896664 kubelet[2829]: I0307 01:09:52.896603 2829 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 7 01:09:52.898925 kubelet[2829]: I0307 01:09:52.898666 2829 cpu_manager.go:225] "Starting" policy="none"
Mar 7 01:09:52.898925 kubelet[2829]: I0307 01:09:52.898682 2829 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 7 01:09:52.898925 kubelet[2829]: I0307 01:09:52.898703 2829 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Mar 7 01:09:52.899625 kubelet[2829]: I0307 01:09:52.899469 2829 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 7 01:09:52.899625 kubelet[2829]: I0307 01:09:52.899596 2829 status_manager.go:249] "Starting to sync pod status with apiserver"
Mar 7 01:09:52.899748 kubelet[2829]: I0307 01:09:52.899643 2829 kubelet.go:2501] "Starting kubelet main sync loop"
Mar 7 01:09:52.899748 kubelet[2829]: E0307 01:09:52.899693 2829 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 7 01:09:52.902486 kubelet[2829]: I0307 01:09:52.902391 2829 policy_none.go:50] "Start"
Mar 7 01:09:52.902486 kubelet[2829]: I0307 01:09:52.902431 2829 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 7 01:09:52.902486 kubelet[2829]: I0307 01:09:52.902445 2829 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 7 01:09:52.908366 kubelet[2829]: I0307 01:09:52.907325 2829 policy_none.go:44] "Start"
Mar 7 01:09:52.914363 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 7 01:09:52.932608 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 7 01:09:52.936298 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 7 01:09:52.948485 kubelet[2829]: E0307 01:09:52.947867 2829 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 7 01:09:52.948485 kubelet[2829]: I0307 01:09:52.948106 2829 eviction_manager.go:194] "Eviction manager: starting control loop"
Mar 7 01:09:52.948485 kubelet[2829]: I0307 01:09:52.948120 2829 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 7 01:09:52.948485 kubelet[2829]: I0307 01:09:52.948398 2829 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Mar 7 01:09:52.951727 kubelet[2829]: E0307 01:09:52.951699 2829 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 7 01:09:52.951829 kubelet[2829]: E0307 01:09:52.951750 2829 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-31-131\" not found"
Mar 7 01:09:53.017999 systemd[1]: Created slice kubepods-burstable-pod1d49cf98a233e58e0543482cb66f2122.slice - libcontainer container kubepods-burstable-pod1d49cf98a233e58e0543482cb66f2122.slice.
Mar 7 01:09:53.037901 kubelet[2829]: E0307 01:09:53.037595 2829 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-131\" not found" node="ip-172-31-31-131"
Mar 7 01:09:53.041697 systemd[1]: Created slice kubepods-burstable-pod4b0382842ecbdf6099cc92193662e93d.slice - libcontainer container kubepods-burstable-pod4b0382842ecbdf6099cc92193662e93d.slice.
Mar 7 01:09:53.053753 kubelet[2829]: I0307 01:09:53.053700 2829 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-31-131"
Mar 7 01:09:53.054202 kubelet[2829]: E0307 01:09:53.054110 2829 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.31.31.131:6443/api/v1/nodes\": dial tcp 172.31.31.131:6443: connect: connection refused" node="ip-172-31-31-131"
Mar 7 01:09:53.054738 kubelet[2829]: E0307 01:09:53.054380 2829 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-131\" not found" node="ip-172-31-31-131"
Mar 7 01:09:53.057220 kubelet[2829]: E0307 01:09:53.057177 2829 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-131?timeout=10s\": dial tcp 172.31.31.131:6443: connect: connection refused" interval="400ms"
Mar 7 01:09:53.057773 kubelet[2829]: I0307 01:09:53.057397 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1d49cf98a233e58e0543482cb66f2122-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-131\" (UID: \"1d49cf98a233e58e0543482cb66f2122\") " pod="kube-system/kube-apiserver-ip-172-31-31-131"
Mar 7 01:09:53.057773 kubelet[2829]: I0307 01:09:53.057432 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4b0382842ecbdf6099cc92193662e93d-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-131\" (UID: \"4b0382842ecbdf6099cc92193662e93d\") " pod="kube-system/kube-controller-manager-ip-172-31-31-131"
Mar 7 01:09:53.057773 kubelet[2829]: I0307 01:09:53.057458 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4b0382842ecbdf6099cc92193662e93d-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-131\" (UID: \"4b0382842ecbdf6099cc92193662e93d\") " pod="kube-system/kube-controller-manager-ip-172-31-31-131"
Mar 7 01:09:53.057773 kubelet[2829]: I0307 01:09:53.057494 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b0382842ecbdf6099cc92193662e93d-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-131\" (UID: \"4b0382842ecbdf6099cc92193662e93d\") " pod="kube-system/kube-controller-manager-ip-172-31-31-131"
Mar 7 01:09:53.057773 kubelet[2829]: I0307 01:09:53.057515 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1301b00a11382e9a9dfdbe2bbbed9fd9-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-131\" (UID: \"1301b00a11382e9a9dfdbe2bbbed9fd9\") " pod="kube-system/kube-scheduler-ip-172-31-31-131"
Mar 7 01:09:53.058074 kubelet[2829]: I0307 01:09:53.057538 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1d49cf98a233e58e0543482cb66f2122-ca-certs\") pod \"kube-apiserver-ip-172-31-31-131\" (UID: \"1d49cf98a233e58e0543482cb66f2122\") " pod="kube-system/kube-apiserver-ip-172-31-31-131"
Mar 7 01:09:53.058074 kubelet[2829]: I0307 01:09:53.057580 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1d49cf98a233e58e0543482cb66f2122-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-131\" (UID: \"1d49cf98a233e58e0543482cb66f2122\") " pod="kube-system/kube-apiserver-ip-172-31-31-131"
Mar 7 01:09:53.058074 kubelet[2829]: I0307 01:09:53.057605 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4b0382842ecbdf6099cc92193662e93d-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-131\" (UID: \"4b0382842ecbdf6099cc92193662e93d\") " pod="kube-system/kube-controller-manager-ip-172-31-31-131"
Mar 7 01:09:53.058074 kubelet[2829]: I0307 01:09:53.057628 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4b0382842ecbdf6099cc92193662e93d-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-131\" (UID: \"4b0382842ecbdf6099cc92193662e93d\") " pod="kube-system/kube-controller-manager-ip-172-31-31-131"
Mar 7 01:09:53.059608 systemd[1]: Created slice kubepods-burstable-pod1301b00a11382e9a9dfdbe2bbbed9fd9.slice - libcontainer container kubepods-burstable-pod1301b00a11382e9a9dfdbe2bbbed9fd9.slice.
Mar 7 01:09:53.061799 kubelet[2829]: E0307 01:09:53.061773 2829 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-131\" not found" node="ip-172-31-31-131"
Mar 7 01:09:53.257097 kubelet[2829]: I0307 01:09:53.257062 2829 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-31-131"
Mar 7 01:09:53.257492 kubelet[2829]: E0307 01:09:53.257428 2829 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.31.31.131:6443/api/v1/nodes\": dial tcp 172.31.31.131:6443: connect: connection refused" node="ip-172-31-31-131"
Mar 7 01:09:53.343479 containerd[1984]: time="2026-03-07T01:09:53.343334653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-131,Uid:1d49cf98a233e58e0543482cb66f2122,Namespace:kube-system,Attempt:0,}"
Mar 7 01:09:53.368173 containerd[1984]: time="2026-03-07T01:09:53.368120668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-131,Uid:1301b00a11382e9a9dfdbe2bbbed9fd9,Namespace:kube-system,Attempt:0,}"
Mar 7 01:09:53.368569 containerd[1984]: time="2026-03-07T01:09:53.368120865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-131,Uid:4b0382842ecbdf6099cc92193662e93d,Namespace:kube-system,Attempt:0,}"
Mar 7 01:09:53.458436 kubelet[2829]: E0307 01:09:53.458350 2829 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-131?timeout=10s\": dial tcp 172.31.31.131:6443: connect: connection refused" interval="800ms"
Mar 7 01:09:53.659600 kubelet[2829]: I0307 01:09:53.659231 2829 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-31-131"
Mar 7 01:09:53.659805 kubelet[2829]: E0307 01:09:53.659747 2829 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.31.31.131:6443/api/v1/nodes\": dial tcp 172.31.31.131:6443: connect: connection refused" node="ip-172-31-31-131"
Mar 7 01:09:53.867044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3490237554.mount: Deactivated successfully.
Mar 7 01:09:53.881967 containerd[1984]: time="2026-03-07T01:09:53.881912646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:09:53.883993 containerd[1984]: time="2026-03-07T01:09:53.883935543Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 7 01:09:53.886046 containerd[1984]: time="2026-03-07T01:09:53.886004037Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:09:53.888016 containerd[1984]: time="2026-03-07T01:09:53.887977085Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:09:53.890083 containerd[1984]: time="2026-03-07T01:09:53.890035543Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 01:09:53.892218 containerd[1984]: time="2026-03-07T01:09:53.892171753Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:09:53.893914 containerd[1984]: time="2026-03-07T01:09:53.893855446Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 01:09:53.897406 containerd[1984]: time="2026-03-07T01:09:53.897347205Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:09:53.898483 
containerd[1984]: time="2026-03-07T01:09:53.898249058Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 530.033284ms" Mar 7 01:09:53.902356 containerd[1984]: time="2026-03-07T01:09:53.902308035Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 534.032346ms" Mar 7 01:09:53.903096 containerd[1984]: time="2026-03-07T01:09:53.903055940Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 559.616386ms" Mar 7 01:09:54.235271 containerd[1984]: time="2026-03-07T01:09:54.234868411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:09:54.235271 containerd[1984]: time="2026-03-07T01:09:54.234935201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:09:54.235271 containerd[1984]: time="2026-03-07T01:09:54.234975702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:09:54.235271 containerd[1984]: time="2026-03-07T01:09:54.235078374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:09:54.236563 containerd[1984]: time="2026-03-07T01:09:54.236088381Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:09:54.236563 containerd[1984]: time="2026-03-07T01:09:54.236143421Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:09:54.236563 containerd[1984]: time="2026-03-07T01:09:54.236182708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:09:54.240384 containerd[1984]: time="2026-03-07T01:09:54.240110129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:09:54.245571 containerd[1984]: time="2026-03-07T01:09:54.244197807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:09:54.245571 containerd[1984]: time="2026-03-07T01:09:54.244256691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:09:54.245571 containerd[1984]: time="2026-03-07T01:09:54.244291782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:09:54.245571 containerd[1984]: time="2026-03-07T01:09:54.244386095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:09:54.261692 kubelet[2829]: E0307 01:09:54.261630 2829 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-131?timeout=10s\": dial tcp 172.31.31.131:6443: connect: connection refused" interval="1.6s" Mar 7 01:09:54.276218 systemd[1]: Started cri-containerd-9a343db0525d449431a29bd6a1aeb40ff5e494fbff617c76dccbcc670b6c0755.scope - libcontainer container 9a343db0525d449431a29bd6a1aeb40ff5e494fbff617c76dccbcc670b6c0755. Mar 7 01:09:54.293802 systemd[1]: Started cri-containerd-9814a7a9d3c8e21a843dff81dfe8d7d52f46807fd0f5ea2343f630272133c84b.scope - libcontainer container 9814a7a9d3c8e21a843dff81dfe8d7d52f46807fd0f5ea2343f630272133c84b. Mar 7 01:09:54.307326 systemd[1]: Started cri-containerd-5645e4c2517402f5c8fda22c9adbf2cc3c533d8ff3f9515bb529dc965a6ebf72.scope - libcontainer container 5645e4c2517402f5c8fda22c9adbf2cc3c533d8ff3f9515bb529dc965a6ebf72. 
Mar 7 01:09:54.378724 containerd[1984]: time="2026-03-07T01:09:54.378312470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-131,Uid:1d49cf98a233e58e0543482cb66f2122,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a343db0525d449431a29bd6a1aeb40ff5e494fbff617c76dccbcc670b6c0755\"" Mar 7 01:09:54.413578 containerd[1984]: time="2026-03-07T01:09:54.412792180Z" level=info msg="CreateContainer within sandbox \"9a343db0525d449431a29bd6a1aeb40ff5e494fbff617c76dccbcc670b6c0755\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 7 01:09:54.421176 containerd[1984]: time="2026-03-07T01:09:54.421132318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-131,Uid:4b0382842ecbdf6099cc92193662e93d,Namespace:kube-system,Attempt:0,} returns sandbox id \"5645e4c2517402f5c8fda22c9adbf2cc3c533d8ff3f9515bb529dc965a6ebf72\"" Mar 7 01:09:54.436806 containerd[1984]: time="2026-03-07T01:09:54.436709431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-131,Uid:1301b00a11382e9a9dfdbe2bbbed9fd9,Namespace:kube-system,Attempt:0,} returns sandbox id \"9814a7a9d3c8e21a843dff81dfe8d7d52f46807fd0f5ea2343f630272133c84b\"" Mar 7 01:09:54.439799 containerd[1984]: time="2026-03-07T01:09:54.439760547Z" level=info msg="CreateContainer within sandbox \"5645e4c2517402f5c8fda22c9adbf2cc3c533d8ff3f9515bb529dc965a6ebf72\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 7 01:09:54.455903 containerd[1984]: time="2026-03-07T01:09:54.455854218Z" level=info msg="CreateContainer within sandbox \"9814a7a9d3c8e21a843dff81dfe8d7d52f46807fd0f5ea2343f630272133c84b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 7 01:09:54.461691 kubelet[2829]: I0307 01:09:54.461663 2829 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-31-131" Mar 7 01:09:54.462271 kubelet[2829]: E0307 01:09:54.462077 2829 
kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.31.31.131:6443/api/v1/nodes\": dial tcp 172.31.31.131:6443: connect: connection refused" node="ip-172-31-31-131" Mar 7 01:09:54.502489 containerd[1984]: time="2026-03-07T01:09:54.502350768Z" level=info msg="CreateContainer within sandbox \"9a343db0525d449431a29bd6a1aeb40ff5e494fbff617c76dccbcc670b6c0755\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8e615b68f0a0be39962d7e47e013df61b6ee11661bb68cb6605899aa53d904d6\"" Mar 7 01:09:54.504519 containerd[1984]: time="2026-03-07T01:09:54.503733051Z" level=info msg="StartContainer for \"8e615b68f0a0be39962d7e47e013df61b6ee11661bb68cb6605899aa53d904d6\"" Mar 7 01:09:54.510365 containerd[1984]: time="2026-03-07T01:09:54.510316375Z" level=info msg="CreateContainer within sandbox \"5645e4c2517402f5c8fda22c9adbf2cc3c533d8ff3f9515bb529dc965a6ebf72\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d4c2e1696f80de9b12cdd6e8d49dda26226c3eeb1adbaf4507a551b22fb2e354\"" Mar 7 01:09:54.511450 containerd[1984]: time="2026-03-07T01:09:54.511169007Z" level=info msg="CreateContainer within sandbox \"9814a7a9d3c8e21a843dff81dfe8d7d52f46807fd0f5ea2343f630272133c84b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"afffce00bbfc4997b60b48aba3a5203636ee0234cb65db56a63d055effe6f6d9\"" Mar 7 01:09:54.512035 containerd[1984]: time="2026-03-07T01:09:54.511915838Z" level=info msg="StartContainer for \"afffce00bbfc4997b60b48aba3a5203636ee0234cb65db56a63d055effe6f6d9\"" Mar 7 01:09:54.513833 containerd[1984]: time="2026-03-07T01:09:54.513804149Z" level=info msg="StartContainer for \"d4c2e1696f80de9b12cdd6e8d49dda26226c3eeb1adbaf4507a551b22fb2e354\"" Mar 7 01:09:54.567850 systemd[1]: Started cri-containerd-8e615b68f0a0be39962d7e47e013df61b6ee11661bb68cb6605899aa53d904d6.scope - libcontainer container 
8e615b68f0a0be39962d7e47e013df61b6ee11661bb68cb6605899aa53d904d6. Mar 7 01:09:54.579294 systemd[1]: Started cri-containerd-afffce00bbfc4997b60b48aba3a5203636ee0234cb65db56a63d055effe6f6d9.scope - libcontainer container afffce00bbfc4997b60b48aba3a5203636ee0234cb65db56a63d055effe6f6d9. Mar 7 01:09:54.583092 systemd[1]: Started cri-containerd-d4c2e1696f80de9b12cdd6e8d49dda26226c3eeb1adbaf4507a551b22fb2e354.scope - libcontainer container d4c2e1696f80de9b12cdd6e8d49dda26226c3eeb1adbaf4507a551b22fb2e354. Mar 7 01:09:54.661895 containerd[1984]: time="2026-03-07T01:09:54.661707358Z" level=info msg="StartContainer for \"8e615b68f0a0be39962d7e47e013df61b6ee11661bb68cb6605899aa53d904d6\" returns successfully" Mar 7 01:09:54.679311 containerd[1984]: time="2026-03-07T01:09:54.679256278Z" level=info msg="StartContainer for \"d4c2e1696f80de9b12cdd6e8d49dda26226c3eeb1adbaf4507a551b22fb2e354\" returns successfully" Mar 7 01:09:54.712902 containerd[1984]: time="2026-03-07T01:09:54.712748513Z" level=info msg="StartContainer for \"afffce00bbfc4997b60b48aba3a5203636ee0234cb65db56a63d055effe6f6d9\" returns successfully" Mar 7 01:09:54.921538 kubelet[2829]: E0307 01:09:54.921146 2829 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-131\" not found" node="ip-172-31-31-131" Mar 7 01:09:54.924486 kubelet[2829]: E0307 01:09:54.924456 2829 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-131\" not found" node="ip-172-31-31-131" Mar 7 01:09:54.945051 kubelet[2829]: E0307 01:09:54.944877 2829 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-131\" not found" node="ip-172-31-31-131" Mar 7 01:09:54.966606 kubelet[2829]: E0307 01:09:54.964965 2829 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate 
signing request: Post \"https://172.31.31.131:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.31.131:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 01:09:55.862836 kubelet[2829]: E0307 01:09:55.862784 2829 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-131?timeout=10s\": dial tcp 172.31.31.131:6443: connect: connection refused" interval="3.2s" Mar 7 01:09:55.938889 kubelet[2829]: E0307 01:09:55.938518 2829 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-131\" not found" node="ip-172-31-31-131" Mar 7 01:09:55.939496 kubelet[2829]: E0307 01:09:55.939480 2829 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-131\" not found" node="ip-172-31-31-131" Mar 7 01:09:56.064318 kubelet[2829]: I0307 01:09:56.064279 2829 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-31-131" Mar 7 01:09:58.387308 kubelet[2829]: I0307 01:09:58.387074 2829 kubelet_node_status.go:77] "Successfully registered node" node="ip-172-31-31-131" Mar 7 01:09:58.387308 kubelet[2829]: E0307 01:09:58.387125 2829 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"ip-172-31-31-131\": node \"ip-172-31-31-131\" not found" Mar 7 01:09:58.403253 kubelet[2829]: E0307 01:09:58.403195 2829 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-31-131\" not found" Mar 7 01:09:58.504221 kubelet[2829]: E0307 01:09:58.504159 2829 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-31-131\" not found" Mar 7 01:09:58.605190 kubelet[2829]: E0307 01:09:58.605128 2829 kubelet_node_status.go:392] "Error getting the 
current node from lister" err="node \"ip-172-31-31-131\" not found" Mar 7 01:09:58.706399 kubelet[2829]: E0307 01:09:58.706260 2829 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-31-131\" not found" Mar 7 01:09:58.808199 kubelet[2829]: E0307 01:09:58.807210 2829 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-31-131\" not found" Mar 7 01:09:58.882375 kubelet[2829]: E0307 01:09:58.881912 2829 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-131\" not found" node="ip-172-31-31-131" Mar 7 01:09:58.908798 kubelet[2829]: E0307 01:09:58.908665 2829 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-31-131\" not found" Mar 7 01:09:59.009420 kubelet[2829]: E0307 01:09:59.009355 2829 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-31-131\" not found" Mar 7 01:09:59.158583 kubelet[2829]: I0307 01:09:59.156846 2829 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-31-131" Mar 7 01:09:59.180408 kubelet[2829]: I0307 01:09:59.180376 2829 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-31-131" Mar 7 01:09:59.185469 update_engine[1965]: I20260307 01:09:59.184590 1965 update_attempter.cc:509] Updating boot flags... 
Mar 7 01:09:59.190011 kubelet[2829]: I0307 01:09:59.189790 2829 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-31-131" Mar 7 01:09:59.302588 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3132) Mar 7 01:09:59.767537 kubelet[2829]: I0307 01:09:59.767490 2829 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-31-131" Mar 7 01:09:59.774619 kubelet[2829]: E0307 01:09:59.774575 2829 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-31-131\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-31-131" Mar 7 01:09:59.826518 kubelet[2829]: I0307 01:09:59.826469 2829 apiserver.go:52] "Watching apiserver" Mar 7 01:09:59.858026 kubelet[2829]: I0307 01:09:59.857979 2829 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 7 01:10:00.391031 systemd[1]: Reloading requested from client PID 3216 ('systemctl') (unit session-9.scope)... Mar 7 01:10:00.391055 systemd[1]: Reloading... Mar 7 01:10:00.559737 zram_generator::config[3259]: No configuration found. Mar 7 01:10:00.867744 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:10:01.008772 systemd[1]: Reloading finished in 617 ms. Mar 7 01:10:01.077969 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:10:01.106509 systemd[1]: kubelet.service: Deactivated successfully. Mar 7 01:10:01.109234 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:10:01.140647 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:10:02.014103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 7 01:10:02.059318 (kubelet)[3316]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 01:10:02.365233 kubelet[3316]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 01:10:02.446583 kubelet[3316]: I0307 01:10:02.444774 3316 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 7 01:10:02.446583 kubelet[3316]: I0307 01:10:02.444852 3316 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 01:10:02.446583 kubelet[3316]: I0307 01:10:02.444875 3316 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 7 01:10:02.446583 kubelet[3316]: I0307 01:10:02.444883 3316 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 7 01:10:02.446583 kubelet[3316]: I0307 01:10:02.445313 3316 server.go:951] "Client rotation is on, will bootstrap in background" Mar 7 01:10:02.463004 kubelet[3316]: I0307 01:10:02.462970 3316 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 7 01:10:02.473445 kubelet[3316]: I0307 01:10:02.473396 3316 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 01:10:02.606802 kubelet[3316]: E0307 01:10:02.606707 3316 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 01:10:02.607421 kubelet[3316]: I0307 01:10:02.607027 3316 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." 
Mar 7 01:10:02.629457 kubelet[3316]: I0307 01:10:02.624724 3316 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 7 01:10:02.631639 kubelet[3316]: I0307 01:10:02.631574 3316 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 01:10:02.637144 kubelet[3316]: I0307 01:10:02.631918 3316 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-131","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 7 
01:10:02.639183 kubelet[3316]: I0307 01:10:02.639043 3316 topology_manager.go:143] "Creating topology manager with none policy" Mar 7 01:10:02.639562 kubelet[3316]: I0307 01:10:02.639421 3316 container_manager_linux.go:308] "Creating device plugin manager" Mar 7 01:10:02.639881 kubelet[3316]: I0307 01:10:02.639728 3316 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 7 01:10:02.640496 sudo[3330]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 7 01:10:02.670011 sudo[3330]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 7 01:10:02.690093 kubelet[3316]: I0307 01:10:02.690033 3316 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 7 01:10:02.720909 kubelet[3316]: I0307 01:10:02.720769 3316 kubelet.go:482] "Attempting to sync node with API server" Mar 7 01:10:02.720909 kubelet[3316]: I0307 01:10:02.720819 3316 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 01:10:02.720909 kubelet[3316]: I0307 01:10:02.720848 3316 kubelet.go:394] "Adding apiserver pod source" Mar 7 01:10:02.720909 kubelet[3316]: I0307 01:10:02.720863 3316 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 01:10:02.746152 kubelet[3316]: I0307 01:10:02.745818 3316 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 01:10:02.764323 kubelet[3316]: I0307 01:10:02.764288 3316 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 01:10:02.770687 kubelet[3316]: I0307 01:10:02.770572 3316 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 7 01:10:02.816823 kubelet[3316]: I0307 01:10:02.815418 3316 server.go:1257] "Started 
kubelet" Mar 7 01:10:02.863747 kubelet[3316]: I0307 01:10:02.857322 3316 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 7 01:10:02.917676 kubelet[3316]: I0307 01:10:02.916816 3316 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 01:10:02.933998 kubelet[3316]: I0307 01:10:02.864057 3316 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 01:10:02.988860 kubelet[3316]: I0307 01:10:02.988753 3316 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 01:10:02.989023 kubelet[3316]: I0307 01:10:02.988905 3316 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 7 01:10:02.989444 kubelet[3316]: I0307 01:10:02.989324 3316 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 01:10:02.989444 kubelet[3316]: I0307 01:10:02.989381 3316 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 7 01:10:03.013406 kubelet[3316]: I0307 01:10:03.005625 3316 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 7 01:10:03.013406 kubelet[3316]: E0307 01:10:02.953935 3316 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-31-131\" not found" Mar 7 01:10:03.016940 kubelet[3316]: I0307 01:10:03.016912 3316 reconciler.go:29] "Reconciler: start to sync state" Mar 7 01:10:03.063191 kubelet[3316]: I0307 01:10:03.059918 3316 factory.go:223] Registration of the systemd container factory successfully Mar 7 01:10:03.063191 kubelet[3316]: I0307 01:10:03.060072 3316 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 01:10:03.063191 kubelet[3316]: I0307 01:10:03.060423 3316 server.go:317] "Adding debug 
handlers to kubelet server" Mar 7 01:10:03.103987 kubelet[3316]: I0307 01:10:03.103959 3316 factory.go:223] Registration of the containerd container factory successfully Mar 7 01:10:03.291994 kubelet[3316]: I0307 01:10:03.291795 3316 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 7 01:10:03.311908 kubelet[3316]: I0307 01:10:03.311626 3316 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 7 01:10:03.311908 kubelet[3316]: I0307 01:10:03.311657 3316 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 7 01:10:03.311908 kubelet[3316]: I0307 01:10:03.311685 3316 kubelet.go:2501] "Starting kubelet main sync loop" Mar 7 01:10:03.311908 kubelet[3316]: E0307 01:10:03.311745 3316 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 01:10:03.412395 kubelet[3316]: E0307 01:10:03.411828 3316 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 7 01:10:03.458323 kubelet[3316]: I0307 01:10:03.457775 3316 cpu_manager.go:225] "Starting" policy="none" Mar 7 01:10:03.458323 kubelet[3316]: I0307 01:10:03.457792 3316 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 7 01:10:03.458323 kubelet[3316]: I0307 01:10:03.457817 3316 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 7 01:10:03.459593 kubelet[3316]: I0307 01:10:03.458656 3316 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Mar 7 01:10:03.473668 kubelet[3316]: I0307 01:10:03.459644 3316 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Mar 7 01:10:03.473668 kubelet[3316]: I0307 01:10:03.459688 3316 policy_none.go:50] "Start" Mar 7 01:10:03.473668 kubelet[3316]: I0307 01:10:03.459702 
3316 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 7 01:10:03.473668 kubelet[3316]: I0307 01:10:03.459724 3316 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 7 01:10:03.473668 kubelet[3316]: I0307 01:10:03.459891 3316 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 7 01:10:03.473668 kubelet[3316]: I0307 01:10:03.459901 3316 policy_none.go:44] "Start" Mar 7 01:10:03.498031 kubelet[3316]: E0307 01:10:03.497993 3316 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 01:10:03.498431 kubelet[3316]: I0307 01:10:03.498263 3316 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 7 01:10:03.498613 kubelet[3316]: I0307 01:10:03.498284 3316 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 01:10:03.500613 kubelet[3316]: I0307 01:10:03.499154 3316 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 7 01:10:03.504949 kubelet[3316]: E0307 01:10:03.504919 3316 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 7 01:10:03.616607 kubelet[3316]: I0307 01:10:03.616278 3316 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-31-131" Mar 7 01:10:03.618381 kubelet[3316]: I0307 01:10:03.618342 3316 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-31-131" Mar 7 01:10:03.619780 kubelet[3316]: I0307 01:10:03.619742 3316 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-31-131" Mar 7 01:10:03.623746 kubelet[3316]: I0307 01:10:03.623712 3316 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-31-131" Mar 7 01:10:03.643710 kubelet[3316]: E0307 01:10:03.643669 3316 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-31-131\" already exists" pod="kube-system/kube-scheduler-ip-172-31-31-131" Mar 7 01:10:03.645138 kubelet[3316]: E0307 01:10:03.644319 3316 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-31-131\" already exists" pod="kube-system/kube-apiserver-ip-172-31-31-131" Mar 7 01:10:03.646768 kubelet[3316]: E0307 01:10:03.646737 3316 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-31-131\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-31-131" Mar 7 01:10:03.656915 kubelet[3316]: I0307 01:10:03.655652 3316 kubelet_node_status.go:123] "Node was previously registered" node="ip-172-31-31-131" Mar 7 01:10:03.656915 kubelet[3316]: I0307 01:10:03.655757 3316 kubelet_node_status.go:77] "Successfully registered node" node="ip-172-31-31-131" Mar 7 01:10:03.741564 kubelet[3316]: I0307 01:10:03.740212 3316 apiserver.go:52] "Watching apiserver" Mar 7 01:10:03.762362 kubelet[3316]: I0307 01:10:03.762290 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/1d49cf98a233e58e0543482cb66f2122-ca-certs\") pod \"kube-apiserver-ip-172-31-31-131\" (UID: \"1d49cf98a233e58e0543482cb66f2122\") " pod="kube-system/kube-apiserver-ip-172-31-31-131" Mar 7 01:10:03.762362 kubelet[3316]: I0307 01:10:03.762356 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4b0382842ecbdf6099cc92193662e93d-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-131\" (UID: \"4b0382842ecbdf6099cc92193662e93d\") " pod="kube-system/kube-controller-manager-ip-172-31-31-131" Mar 7 01:10:03.762614 kubelet[3316]: I0307 01:10:03.762381 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b0382842ecbdf6099cc92193662e93d-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-131\" (UID: \"4b0382842ecbdf6099cc92193662e93d\") " pod="kube-system/kube-controller-manager-ip-172-31-31-131" Mar 7 01:10:03.762614 kubelet[3316]: I0307 01:10:03.762423 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1d49cf98a233e58e0543482cb66f2122-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-131\" (UID: \"1d49cf98a233e58e0543482cb66f2122\") " pod="kube-system/kube-apiserver-ip-172-31-31-131" Mar 7 01:10:03.762614 kubelet[3316]: I0307 01:10:03.762459 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1d49cf98a233e58e0543482cb66f2122-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-131\" (UID: \"1d49cf98a233e58e0543482cb66f2122\") " pod="kube-system/kube-apiserver-ip-172-31-31-131" Mar 7 01:10:03.762614 kubelet[3316]: I0307 01:10:03.762486 3316 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4b0382842ecbdf6099cc92193662e93d-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-131\" (UID: \"4b0382842ecbdf6099cc92193662e93d\") " pod="kube-system/kube-controller-manager-ip-172-31-31-131" Mar 7 01:10:03.762797 kubelet[3316]: I0307 01:10:03.762536 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4b0382842ecbdf6099cc92193662e93d-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-131\" (UID: \"4b0382842ecbdf6099cc92193662e93d\") " pod="kube-system/kube-controller-manager-ip-172-31-31-131" Mar 7 01:10:03.762797 kubelet[3316]: I0307 01:10:03.762765 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4b0382842ecbdf6099cc92193662e93d-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-131\" (UID: \"4b0382842ecbdf6099cc92193662e93d\") " pod="kube-system/kube-controller-manager-ip-172-31-31-131" Mar 7 01:10:03.762887 kubelet[3316]: I0307 01:10:03.762826 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1301b00a11382e9a9dfdbe2bbbed9fd9-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-131\" (UID: \"1301b00a11382e9a9dfdbe2bbbed9fd9\") " pod="kube-system/kube-scheduler-ip-172-31-31-131" Mar 7 01:10:03.806593 kubelet[3316]: I0307 01:10:03.806406 3316 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 7 01:10:03.991720 kubelet[3316]: I0307 01:10:03.991019 3316 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-31-131" podStartSLOduration=4.990983939 podStartE2EDuration="4.990983939s" 
podCreationTimestamp="2026-03-07 01:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:10:03.980352993 +0000 UTC m=+1.880211247" watchObservedRunningTime="2026-03-07 01:10:03.990983939 +0000 UTC m=+1.890842202" Mar 7 01:10:04.010812 kubelet[3316]: I0307 01:10:04.010744 3316 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-31-131" podStartSLOduration=5.010726665 podStartE2EDuration="5.010726665s" podCreationTimestamp="2026-03-07 01:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:10:03.991370289 +0000 UTC m=+1.891228553" watchObservedRunningTime="2026-03-07 01:10:04.010726665 +0000 UTC m=+1.910584928" Mar 7 01:10:04.032841 kubelet[3316]: I0307 01:10:04.032598 3316 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-31-131" podStartSLOduration=5.03256491 podStartE2EDuration="5.03256491s" podCreationTimestamp="2026-03-07 01:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:10:04.011067047 +0000 UTC m=+1.910925309" watchObservedRunningTime="2026-03-07 01:10:04.03256491 +0000 UTC m=+1.932423171" Mar 7 01:10:04.583326 sudo[3330]: pam_unix(sudo:session): session closed for user root Mar 7 01:10:06.724882 sudo[2320]: pam_unix(sudo:session): session closed for user root Mar 7 01:10:06.804439 sshd[2317]: pam_unix(sshd:session): session closed for user core Mar 7 01:10:06.809399 systemd-logind[1964]: Session 9 logged out. Waiting for processes to exit. Mar 7 01:10:06.810154 systemd[1]: sshd@8-172.31.31.131:22-68.220.241.50:51476.service: Deactivated successfully. Mar 7 01:10:06.812661 systemd[1]: session-9.scope: Deactivated successfully. 
Mar 7 01:10:06.813143 systemd[1]: session-9.scope: Consumed 4.173s CPU time, 148.6M memory peak, 0B memory swap peak. Mar 7 01:10:06.814514 systemd-logind[1964]: Removed session 9. Mar 7 01:10:07.219048 kubelet[3316]: I0307 01:10:07.218998 3316 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 7 01:10:07.219916 containerd[1984]: time="2026-03-07T01:10:07.219861213Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 7 01:10:07.220366 kubelet[3316]: I0307 01:10:07.220095 3316 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 7 01:10:08.245368 systemd[1]: Created slice kubepods-besteffort-pod5b861300_3a22_4d6c_8cbb_13975eb3d4a9.slice - libcontainer container kubepods-besteffort-pod5b861300_3a22_4d6c_8cbb_13975eb3d4a9.slice. Mar 7 01:10:08.260515 systemd[1]: Created slice kubepods-burstable-pod92988e03_c98b_40f5_88ca_bebf8290ccdb.slice - libcontainer container kubepods-burstable-pod92988e03_c98b_40f5_88ca_bebf8290ccdb.slice. 
Mar 7 01:10:08.292448 kubelet[3316]: I0307 01:10:08.292387 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b861300-3a22-4d6c-8cbb-13975eb3d4a9-xtables-lock\") pod \"kube-proxy-kdsxp\" (UID: \"5b861300-3a22-4d6c-8cbb-13975eb3d4a9\") " pod="kube-system/kube-proxy-kdsxp" Mar 7 01:10:08.292448 kubelet[3316]: I0307 01:10:08.292446 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b861300-3a22-4d6c-8cbb-13975eb3d4a9-lib-modules\") pod \"kube-proxy-kdsxp\" (UID: \"5b861300-3a22-4d6c-8cbb-13975eb3d4a9\") " pod="kube-system/kube-proxy-kdsxp" Mar 7 01:10:08.293080 kubelet[3316]: I0307 01:10:08.292470 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwfg7\" (UniqueName: \"kubernetes.io/projected/5b861300-3a22-4d6c-8cbb-13975eb3d4a9-kube-api-access-dwfg7\") pod \"kube-proxy-kdsxp\" (UID: \"5b861300-3a22-4d6c-8cbb-13975eb3d4a9\") " pod="kube-system/kube-proxy-kdsxp" Mar 7 01:10:08.293080 kubelet[3316]: I0307 01:10:08.292496 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5b861300-3a22-4d6c-8cbb-13975eb3d4a9-kube-proxy\") pod \"kube-proxy-kdsxp\" (UID: \"5b861300-3a22-4d6c-8cbb-13975eb3d4a9\") " pod="kube-system/kube-proxy-kdsxp" Mar 7 01:10:08.394119 kubelet[3316]: I0307 01:10:08.393097 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-lib-modules\") pod \"cilium-gsvgl\" (UID: \"92988e03-c98b-40f5-88ca-bebf8290ccdb\") " pod="kube-system/cilium-gsvgl" Mar 7 01:10:08.394119 kubelet[3316]: I0307 01:10:08.394059 3316 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/92988e03-c98b-40f5-88ca-bebf8290ccdb-cilium-config-path\") pod \"cilium-gsvgl\" (UID: \"92988e03-c98b-40f5-88ca-bebf8290ccdb\") " pod="kube-system/cilium-gsvgl" Mar 7 01:10:08.394434 kubelet[3316]: I0307 01:10:08.394231 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/92988e03-c98b-40f5-88ca-bebf8290ccdb-hubble-tls\") pod \"cilium-gsvgl\" (UID: \"92988e03-c98b-40f5-88ca-bebf8290ccdb\") " pod="kube-system/cilium-gsvgl" Mar 7 01:10:08.394911 kubelet[3316]: I0307 01:10:08.394605 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvst5\" (UniqueName: \"kubernetes.io/projected/92988e03-c98b-40f5-88ca-bebf8290ccdb-kube-api-access-fvst5\") pod \"cilium-gsvgl\" (UID: \"92988e03-c98b-40f5-88ca-bebf8290ccdb\") " pod="kube-system/cilium-gsvgl" Mar 7 01:10:08.394911 kubelet[3316]: I0307 01:10:08.394708 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-bpf-maps\") pod \"cilium-gsvgl\" (UID: \"92988e03-c98b-40f5-88ca-bebf8290ccdb\") " pod="kube-system/cilium-gsvgl" Mar 7 01:10:08.394911 kubelet[3316]: I0307 01:10:08.394775 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/92988e03-c98b-40f5-88ca-bebf8290ccdb-clustermesh-secrets\") pod \"cilium-gsvgl\" (UID: \"92988e03-c98b-40f5-88ca-bebf8290ccdb\") " pod="kube-system/cilium-gsvgl" Mar 7 01:10:08.394911 kubelet[3316]: I0307 01:10:08.394851 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-host-proc-sys-net\") pod \"cilium-gsvgl\" (UID: \"92988e03-c98b-40f5-88ca-bebf8290ccdb\") " pod="kube-system/cilium-gsvgl" Mar 7 01:10:08.395791 kubelet[3316]: I0307 01:10:08.394884 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-host-proc-sys-kernel\") pod \"cilium-gsvgl\" (UID: \"92988e03-c98b-40f5-88ca-bebf8290ccdb\") " pod="kube-system/cilium-gsvgl" Mar 7 01:10:08.395791 kubelet[3316]: I0307 01:10:08.395209 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-cilium-cgroup\") pod \"cilium-gsvgl\" (UID: \"92988e03-c98b-40f5-88ca-bebf8290ccdb\") " pod="kube-system/cilium-gsvgl" Mar 7 01:10:08.395791 kubelet[3316]: I0307 01:10:08.395509 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-cni-path\") pod \"cilium-gsvgl\" (UID: \"92988e03-c98b-40f5-88ca-bebf8290ccdb\") " pod="kube-system/cilium-gsvgl" Mar 7 01:10:08.395791 kubelet[3316]: I0307 01:10:08.395556 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-xtables-lock\") pod \"cilium-gsvgl\" (UID: \"92988e03-c98b-40f5-88ca-bebf8290ccdb\") " pod="kube-system/cilium-gsvgl" Mar 7 01:10:08.395791 kubelet[3316]: I0307 01:10:08.395630 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-cilium-run\") pod \"cilium-gsvgl\" (UID: 
\"92988e03-c98b-40f5-88ca-bebf8290ccdb\") " pod="kube-system/cilium-gsvgl" Mar 7 01:10:08.395791 kubelet[3316]: I0307 01:10:08.395653 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-hostproc\") pod \"cilium-gsvgl\" (UID: \"92988e03-c98b-40f5-88ca-bebf8290ccdb\") " pod="kube-system/cilium-gsvgl" Mar 7 01:10:08.396349 kubelet[3316]: I0307 01:10:08.395677 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-etc-cni-netd\") pod \"cilium-gsvgl\" (UID: \"92988e03-c98b-40f5-88ca-bebf8290ccdb\") " pod="kube-system/cilium-gsvgl" Mar 7 01:10:08.449205 kubelet[3316]: E0307 01:10:08.449156 3316 status_manager.go:1045] "Failed to get status for pod" err="pods \"cilium-operator-78cf5644cb-cvnjl\" is forbidden: User \"system:node:ip-172-31-31-131\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-31-131' and this object" podUID="d99f49b0-42b6-44fc-ac84-48bc0ca83467" pod="kube-system/cilium-operator-78cf5644cb-cvnjl" Mar 7 01:10:08.455994 systemd[1]: Created slice kubepods-besteffort-podd99f49b0_42b6_44fc_ac84_48bc0ca83467.slice - libcontainer container kubepods-besteffort-podd99f49b0_42b6_44fc_ac84_48bc0ca83467.slice. 
Mar 7 01:10:08.575493 containerd[1984]: time="2026-03-07T01:10:08.574228236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gsvgl,Uid:92988e03-c98b-40f5-88ca-bebf8290ccdb,Namespace:kube-system,Attempt:0,}" Mar 7 01:10:08.575493 containerd[1984]: time="2026-03-07T01:10:08.574227921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kdsxp,Uid:5b861300-3a22-4d6c-8cbb-13975eb3d4a9,Namespace:kube-system,Attempt:0,}" Mar 7 01:10:08.600584 kubelet[3316]: I0307 01:10:08.597571 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d99f49b0-42b6-44fc-ac84-48bc0ca83467-cilium-config-path\") pod \"cilium-operator-78cf5644cb-cvnjl\" (UID: \"d99f49b0-42b6-44fc-ac84-48bc0ca83467\") " pod="kube-system/cilium-operator-78cf5644cb-cvnjl" Mar 7 01:10:08.600584 kubelet[3316]: I0307 01:10:08.597642 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff7zv\" (UniqueName: \"kubernetes.io/projected/d99f49b0-42b6-44fc-ac84-48bc0ca83467-kube-api-access-ff7zv\") pod \"cilium-operator-78cf5644cb-cvnjl\" (UID: \"d99f49b0-42b6-44fc-ac84-48bc0ca83467\") " pod="kube-system/cilium-operator-78cf5644cb-cvnjl" Mar 7 01:10:08.611537 containerd[1984]: time="2026-03-07T01:10:08.611253094Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:10:08.611537 containerd[1984]: time="2026-03-07T01:10:08.611386445Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:10:08.611537 containerd[1984]: time="2026-03-07T01:10:08.611429379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:10:08.611787 containerd[1984]: time="2026-03-07T01:10:08.611609030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:10:08.640543 containerd[1984]: time="2026-03-07T01:10:08.640407985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:10:08.640543 containerd[1984]: time="2026-03-07T01:10:08.640488817Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:10:08.640543 containerd[1984]: time="2026-03-07T01:10:08.640518439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:10:08.640927 systemd[1]: Started cri-containerd-41646e99923712e9f67a8b5b632b0b37fea0dd58912f6c7e55833a792b02f055.scope - libcontainer container 41646e99923712e9f67a8b5b632b0b37fea0dd58912f6c7e55833a792b02f055. Mar 7 01:10:08.642379 containerd[1984]: time="2026-03-07T01:10:08.640820074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:10:08.673246 systemd[1]: Started cri-containerd-5afd4875c5697c74660b7700208b4d6cc115c08963e3f1f3a695f037ad158f13.scope - libcontainer container 5afd4875c5697c74660b7700208b4d6cc115c08963e3f1f3a695f037ad158f13. 
Mar 7 01:10:08.690246 containerd[1984]: time="2026-03-07T01:10:08.690019823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gsvgl,Uid:92988e03-c98b-40f5-88ca-bebf8290ccdb,Namespace:kube-system,Attempt:0,} returns sandbox id \"41646e99923712e9f67a8b5b632b0b37fea0dd58912f6c7e55833a792b02f055\"" Mar 7 01:10:08.693891 containerd[1984]: time="2026-03-07T01:10:08.693832629Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 7 01:10:08.762840 containerd[1984]: time="2026-03-07T01:10:08.762767554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kdsxp,Uid:5b861300-3a22-4d6c-8cbb-13975eb3d4a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"5afd4875c5697c74660b7700208b4d6cc115c08963e3f1f3a695f037ad158f13\"" Mar 7 01:10:08.763592 containerd[1984]: time="2026-03-07T01:10:08.763544268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-cvnjl,Uid:d99f49b0-42b6-44fc-ac84-48bc0ca83467,Namespace:kube-system,Attempt:0,}" Mar 7 01:10:08.779830 containerd[1984]: time="2026-03-07T01:10:08.779679887Z" level=info msg="CreateContainer within sandbox \"5afd4875c5697c74660b7700208b4d6cc115c08963e3f1f3a695f037ad158f13\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 7 01:10:08.803642 containerd[1984]: time="2026-03-07T01:10:08.803273137Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:10:08.803642 containerd[1984]: time="2026-03-07T01:10:08.803396406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:10:08.803642 containerd[1984]: time="2026-03-07T01:10:08.803420041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:10:08.804223 containerd[1984]: time="2026-03-07T01:10:08.804075095Z" level=info msg="CreateContainer within sandbox \"5afd4875c5697c74660b7700208b4d6cc115c08963e3f1f3a695f037ad158f13\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3092b5ad5d08d0b7ee82a8ddaa23192dea4c3f13122055468c7fbc92329dc277\"" Mar 7 01:10:08.805653 containerd[1984]: time="2026-03-07T01:10:08.805528997Z" level=info msg="StartContainer for \"3092b5ad5d08d0b7ee82a8ddaa23192dea4c3f13122055468c7fbc92329dc277\"" Mar 7 01:10:08.806270 containerd[1984]: time="2026-03-07T01:10:08.804501595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:10:08.843840 systemd[1]: Started cri-containerd-476555b9c42e03dc3852c76af4e2566cd70e5cef39a9215626ced4d0906548d1.scope - libcontainer container 476555b9c42e03dc3852c76af4e2566cd70e5cef39a9215626ced4d0906548d1. Mar 7 01:10:08.858788 systemd[1]: Started cri-containerd-3092b5ad5d08d0b7ee82a8ddaa23192dea4c3f13122055468c7fbc92329dc277.scope - libcontainer container 3092b5ad5d08d0b7ee82a8ddaa23192dea4c3f13122055468c7fbc92329dc277. 
Mar 7 01:10:08.925115 containerd[1984]: time="2026-03-07T01:10:08.925084183Z" level=info msg="StartContainer for \"3092b5ad5d08d0b7ee82a8ddaa23192dea4c3f13122055468c7fbc92329dc277\" returns successfully" Mar 7 01:10:08.925610 containerd[1984]: time="2026-03-07T01:10:08.925398787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-cvnjl,Uid:d99f49b0-42b6-44fc-ac84-48bc0ca83467,Namespace:kube-system,Attempt:0,} returns sandbox id \"476555b9c42e03dc3852c76af4e2566cd70e5cef39a9215626ced4d0906548d1\"" Mar 7 01:10:12.148796 kubelet[3316]: I0307 01:10:12.148732 3316 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-kdsxp" podStartSLOduration=4.148712877 podStartE2EDuration="4.148712877s" podCreationTimestamp="2026-03-07 01:10:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:10:09.506762977 +0000 UTC m=+7.406621240" watchObservedRunningTime="2026-03-07 01:10:12.148712877 +0000 UTC m=+10.048571139" Mar 7 01:10:17.316666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2723518327.mount: Deactivated successfully. 
Mar 7 01:10:20.184053 containerd[1984]: time="2026-03-07T01:10:20.183821912Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:10:20.187817 containerd[1984]: time="2026-03-07T01:10:20.171893490Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 7 01:10:20.212864 containerd[1984]: time="2026-03-07T01:10:20.212803697Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:10:20.214961 containerd[1984]: time="2026-03-07T01:10:20.214894499Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.521014037s" Mar 7 01:10:20.214961 containerd[1984]: time="2026-03-07T01:10:20.214947099Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 7 01:10:20.222404 containerd[1984]: time="2026-03-07T01:10:20.222016247Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 7 01:10:20.230521 containerd[1984]: time="2026-03-07T01:10:20.230468192Z" level=info msg="CreateContainer within sandbox \"41646e99923712e9f67a8b5b632b0b37fea0dd58912f6c7e55833a792b02f055\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 7 01:10:20.339078 containerd[1984]: time="2026-03-07T01:10:20.338999098Z" level=info msg="CreateContainer within sandbox \"41646e99923712e9f67a8b5b632b0b37fea0dd58912f6c7e55833a792b02f055\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9e47b2ca99fb7e245a29078931364a8c8d1c456a0e8316c4bdd4010e93a8b564\"" Mar 7 01:10:20.340795 containerd[1984]: time="2026-03-07T01:10:20.339850798Z" level=info msg="StartContainer for \"9e47b2ca99fb7e245a29078931364a8c8d1c456a0e8316c4bdd4010e93a8b564\"" Mar 7 01:10:20.464913 systemd[1]: Started cri-containerd-9e47b2ca99fb7e245a29078931364a8c8d1c456a0e8316c4bdd4010e93a8b564.scope - libcontainer container 9e47b2ca99fb7e245a29078931364a8c8d1c456a0e8316c4bdd4010e93a8b564. Mar 7 01:10:20.507064 containerd[1984]: time="2026-03-07T01:10:20.506975279Z" level=info msg="StartContainer for \"9e47b2ca99fb7e245a29078931364a8c8d1c456a0e8316c4bdd4010e93a8b564\" returns successfully" Mar 7 01:10:20.523659 systemd[1]: cri-containerd-9e47b2ca99fb7e245a29078931364a8c8d1c456a0e8316c4bdd4010e93a8b564.scope: Deactivated successfully. Mar 7 01:10:20.652304 containerd[1984]: time="2026-03-07T01:10:20.625536503Z" level=info msg="shim disconnected" id=9e47b2ca99fb7e245a29078931364a8c8d1c456a0e8316c4bdd4010e93a8b564 namespace=k8s.io Mar 7 01:10:20.652304 containerd[1984]: time="2026-03-07T01:10:20.652303047Z" level=warning msg="cleaning up after shim disconnected" id=9e47b2ca99fb7e245a29078931364a8c8d1c456a0e8316c4bdd4010e93a8b564 namespace=k8s.io Mar 7 01:10:20.652727 containerd[1984]: time="2026-03-07T01:10:20.652320886Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:10:21.325779 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e47b2ca99fb7e245a29078931364a8c8d1c456a0e8316c4bdd4010e93a8b564-rootfs.mount: Deactivated successfully. 
Mar 7 01:10:21.459371 containerd[1984]: time="2026-03-07T01:10:21.459022578Z" level=info msg="CreateContainer within sandbox \"41646e99923712e9f67a8b5b632b0b37fea0dd58912f6c7e55833a792b02f055\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 7 01:10:21.521684 containerd[1984]: time="2026-03-07T01:10:21.521428386Z" level=info msg="CreateContainer within sandbox \"41646e99923712e9f67a8b5b632b0b37fea0dd58912f6c7e55833a792b02f055\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"07905af5a819f565ebe9fe9c926015aecd7376eb388d3eeda0113929193fb9b0\"" Mar 7 01:10:21.523582 containerd[1984]: time="2026-03-07T01:10:21.522479635Z" level=info msg="StartContainer for \"07905af5a819f565ebe9fe9c926015aecd7376eb388d3eeda0113929193fb9b0\"" Mar 7 01:10:21.581850 systemd[1]: Started cri-containerd-07905af5a819f565ebe9fe9c926015aecd7376eb388d3eeda0113929193fb9b0.scope - libcontainer container 07905af5a819f565ebe9fe9c926015aecd7376eb388d3eeda0113929193fb9b0. Mar 7 01:10:21.672280 containerd[1984]: time="2026-03-07T01:10:21.672233779Z" level=info msg="StartContainer for \"07905af5a819f565ebe9fe9c926015aecd7376eb388d3eeda0113929193fb9b0\" returns successfully" Mar 7 01:10:21.683805 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 7 01:10:21.685176 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:10:21.685274 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 7 01:10:21.692118 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 7 01:10:21.692471 systemd[1]: cri-containerd-07905af5a819f565ebe9fe9c926015aecd7376eb388d3eeda0113929193fb9b0.scope: Deactivated successfully. 
Mar 7 01:10:21.786040 containerd[1984]: time="2026-03-07T01:10:21.785964504Z" level=info msg="shim disconnected" id=07905af5a819f565ebe9fe9c926015aecd7376eb388d3eeda0113929193fb9b0 namespace=k8s.io Mar 7 01:10:21.786040 containerd[1984]: time="2026-03-07T01:10:21.786028503Z" level=warning msg="cleaning up after shim disconnected" id=07905af5a819f565ebe9fe9c926015aecd7376eb388d3eeda0113929193fb9b0 namespace=k8s.io Mar 7 01:10:21.786040 containerd[1984]: time="2026-03-07T01:10:21.786041758Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:10:21.814688 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:10:21.835662 containerd[1984]: time="2026-03-07T01:10:21.833874022Z" level=warning msg="cleanup warnings time=\"2026-03-07T01:10:21Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 7 01:10:22.326765 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-07905af5a819f565ebe9fe9c926015aecd7376eb388d3eeda0113929193fb9b0-rootfs.mount: Deactivated successfully. 
Mar 7 01:10:22.457306 containerd[1984]: time="2026-03-07T01:10:22.457244018Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:10:22.460440 containerd[1984]: time="2026-03-07T01:10:22.460140348Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Mar 7 01:10:22.462909 containerd[1984]: time="2026-03-07T01:10:22.462007551Z" level=info msg="CreateContainer within sandbox \"41646e99923712e9f67a8b5b632b0b37fea0dd58912f6c7e55833a792b02f055\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 7 01:10:22.463036 containerd[1984]: time="2026-03-07T01:10:22.462904598Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:10:22.470119 containerd[1984]: time="2026-03-07T01:10:22.469978108Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.247912798s"
Mar 7 01:10:22.470119 containerd[1984]: time="2026-03-07T01:10:22.470052605Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Mar 7 01:10:22.485275 containerd[1984]: time="2026-03-07T01:10:22.485186115Z" level=info msg="CreateContainer within sandbox \"476555b9c42e03dc3852c76af4e2566cd70e5cef39a9215626ced4d0906548d1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 7 01:10:22.523580 containerd[1984]: time="2026-03-07T01:10:22.523128792Z" level=info msg="CreateContainer within sandbox \"41646e99923712e9f67a8b5b632b0b37fea0dd58912f6c7e55833a792b02f055\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d005a39de75fe248a0ee59b762e15701bfc95cced05aad8e345831ed290d9d8f\""
Mar 7 01:10:22.526904 containerd[1984]: time="2026-03-07T01:10:22.526869133Z" level=info msg="StartContainer for \"d005a39de75fe248a0ee59b762e15701bfc95cced05aad8e345831ed290d9d8f\""
Mar 7 01:10:22.529664 containerd[1984]: time="2026-03-07T01:10:22.529525518Z" level=info msg="CreateContainer within sandbox \"476555b9c42e03dc3852c76af4e2566cd70e5cef39a9215626ced4d0906548d1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"30ae799a869a88c59e9ed120899966d4b780f8f86f565d9ea95cf144fe9df735\""
Mar 7 01:10:22.536849 containerd[1984]: time="2026-03-07T01:10:22.535814243Z" level=info msg="StartContainer for \"30ae799a869a88c59e9ed120899966d4b780f8f86f565d9ea95cf144fe9df735\""
Mar 7 01:10:22.591769 systemd[1]: Started cri-containerd-30ae799a869a88c59e9ed120899966d4b780f8f86f565d9ea95cf144fe9df735.scope - libcontainer container 30ae799a869a88c59e9ed120899966d4b780f8f86f565d9ea95cf144fe9df735.
Mar 7 01:10:22.602055 systemd[1]: Started cri-containerd-d005a39de75fe248a0ee59b762e15701bfc95cced05aad8e345831ed290d9d8f.scope - libcontainer container d005a39de75fe248a0ee59b762e15701bfc95cced05aad8e345831ed290d9d8f.
Mar 7 01:10:22.652528 containerd[1984]: time="2026-03-07T01:10:22.652475695Z" level=info msg="StartContainer for \"30ae799a869a88c59e9ed120899966d4b780f8f86f565d9ea95cf144fe9df735\" returns successfully"
Mar 7 01:10:22.661983 containerd[1984]: time="2026-03-07T01:10:22.661936909Z" level=info msg="StartContainer for \"d005a39de75fe248a0ee59b762e15701bfc95cced05aad8e345831ed290d9d8f\" returns successfully"
Mar 7 01:10:22.667148 systemd[1]: cri-containerd-d005a39de75fe248a0ee59b762e15701bfc95cced05aad8e345831ed290d9d8f.scope: Deactivated successfully.
Mar 7 01:10:22.847073 containerd[1984]: time="2026-03-07T01:10:22.846907644Z" level=info msg="shim disconnected" id=d005a39de75fe248a0ee59b762e15701bfc95cced05aad8e345831ed290d9d8f namespace=k8s.io
Mar 7 01:10:22.847073 containerd[1984]: time="2026-03-07T01:10:22.846984993Z" level=warning msg="cleaning up after shim disconnected" id=d005a39de75fe248a0ee59b762e15701bfc95cced05aad8e345831ed290d9d8f namespace=k8s.io
Mar 7 01:10:22.847073 containerd[1984]: time="2026-03-07T01:10:22.847000725Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:10:22.868248 containerd[1984]: time="2026-03-07T01:10:22.868098052Z" level=warning msg="cleanup warnings time=\"2026-03-07T01:10:22Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 7 01:10:23.464042 containerd[1984]: time="2026-03-07T01:10:23.463991191Z" level=info msg="CreateContainer within sandbox \"41646e99923712e9f67a8b5b632b0b37fea0dd58912f6c7e55833a792b02f055\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 7 01:10:23.511318 containerd[1984]: time="2026-03-07T01:10:23.510955696Z" level=info msg="CreateContainer within sandbox \"41646e99923712e9f67a8b5b632b0b37fea0dd58912f6c7e55833a792b02f055\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0b547c89ea5221ba3439437273187ed7eeca2212ba1eba5dda10e55b22599c84\""
Mar 7 01:10:23.511332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1790261857.mount: Deactivated successfully.
Mar 7 01:10:23.513622 containerd[1984]: time="2026-03-07T01:10:23.512612620Z" level=info msg="StartContainer for \"0b547c89ea5221ba3439437273187ed7eeca2212ba1eba5dda10e55b22599c84\""
Mar 7 01:10:23.558766 systemd[1]: Started cri-containerd-0b547c89ea5221ba3439437273187ed7eeca2212ba1eba5dda10e55b22599c84.scope - libcontainer container 0b547c89ea5221ba3439437273187ed7eeca2212ba1eba5dda10e55b22599c84.
Mar 7 01:10:23.591166 systemd[1]: cri-containerd-0b547c89ea5221ba3439437273187ed7eeca2212ba1eba5dda10e55b22599c84.scope: Deactivated successfully.
Mar 7 01:10:23.596541 containerd[1984]: time="2026-03-07T01:10:23.596454106Z" level=info msg="StartContainer for \"0b547c89ea5221ba3439437273187ed7eeca2212ba1eba5dda10e55b22599c84\" returns successfully"
Mar 7 01:10:23.637429 containerd[1984]: time="2026-03-07T01:10:23.637332433Z" level=info msg="shim disconnected" id=0b547c89ea5221ba3439437273187ed7eeca2212ba1eba5dda10e55b22599c84 namespace=k8s.io
Mar 7 01:10:23.637429 containerd[1984]: time="2026-03-07T01:10:23.637422234Z" level=warning msg="cleaning up after shim disconnected" id=0b547c89ea5221ba3439437273187ed7eeca2212ba1eba5dda10e55b22599c84 namespace=k8s.io
Mar 7 01:10:23.637429 containerd[1984]: time="2026-03-07T01:10:23.637434597Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:10:23.909364 kubelet[3316]: I0307 01:10:23.909192 3316 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-operator-78cf5644cb-cvnjl" podStartSLOduration=2.365260003 podStartE2EDuration="15.909175626s" podCreationTimestamp="2026-03-07 01:10:08 +0000 UTC" firstStartedPulling="2026-03-07 01:10:08.931287877 +0000 UTC m=+6.831146127" lastFinishedPulling="2026-03-07 01:10:22.47520351 +0000 UTC m=+20.375061750" observedRunningTime="2026-03-07 01:10:23.683077496 +0000 UTC m=+21.582935760" watchObservedRunningTime="2026-03-07 01:10:23.909175626 +0000 UTC m=+21.809033890"
Mar 7 01:10:24.328840 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b547c89ea5221ba3439437273187ed7eeca2212ba1eba5dda10e55b22599c84-rootfs.mount: Deactivated successfully.
Mar 7 01:10:24.490712 containerd[1984]: time="2026-03-07T01:10:24.489309930Z" level=info msg="CreateContainer within sandbox \"41646e99923712e9f67a8b5b632b0b37fea0dd58912f6c7e55833a792b02f055\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 7 01:10:24.538912 containerd[1984]: time="2026-03-07T01:10:24.538694807Z" level=info msg="CreateContainer within sandbox \"41646e99923712e9f67a8b5b632b0b37fea0dd58912f6c7e55833a792b02f055\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"22d6da250aa9b4dedc4ec22c15159165982893435db20c7c88a4cf64471b8c20\""
Mar 7 01:10:24.542644 containerd[1984]: time="2026-03-07T01:10:24.539316527Z" level=info msg="StartContainer for \"22d6da250aa9b4dedc4ec22c15159165982893435db20c7c88a4cf64471b8c20\""
Mar 7 01:10:24.614068 systemd[1]: Started cri-containerd-22d6da250aa9b4dedc4ec22c15159165982893435db20c7c88a4cf64471b8c20.scope - libcontainer container 22d6da250aa9b4dedc4ec22c15159165982893435db20c7c88a4cf64471b8c20.
Mar 7 01:10:24.714455 containerd[1984]: time="2026-03-07T01:10:24.714407471Z" level=info msg="StartContainer for \"22d6da250aa9b4dedc4ec22c15159165982893435db20c7c88a4cf64471b8c20\" returns successfully"
Mar 7 01:10:25.222255 kubelet[3316]: I0307 01:10:25.222061 3316 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
Mar 7 01:10:25.489695 systemd[1]: Created slice kubepods-burstable-pod8f379d63_3a94_4a1c_ba65_0fadd7b29c1a.slice - libcontainer container kubepods-burstable-pod8f379d63_3a94_4a1c_ba65_0fadd7b29c1a.slice.
Mar 7 01:10:25.512416 systemd[1]: Created slice kubepods-burstable-pod9fdbd44f_4161_484b_9805_6d16c28c7cf4.slice - libcontainer container kubepods-burstable-pod9fdbd44f_4161_484b_9805_6d16c28c7cf4.slice.
Mar 7 01:10:25.526386 kubelet[3316]: I0307 01:10:25.526173 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f379d63-3a94-4a1c-ba65-0fadd7b29c1a-config-volume\") pod \"coredns-7d764666f9-48p77\" (UID: \"8f379d63-3a94-4a1c-ba65-0fadd7b29c1a\") " pod="kube-system/coredns-7d764666f9-48p77"
Mar 7 01:10:25.526386 kubelet[3316]: I0307 01:10:25.526214 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9fdbd44f-4161-484b-9805-6d16c28c7cf4-config-volume\") pod \"coredns-7d764666f9-qmx6l\" (UID: \"9fdbd44f-4161-484b-9805-6d16c28c7cf4\") " pod="kube-system/coredns-7d764666f9-qmx6l"
Mar 7 01:10:25.526386 kubelet[3316]: I0307 01:10:25.526249 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dppjj\" (UniqueName: \"kubernetes.io/projected/8f379d63-3a94-4a1c-ba65-0fadd7b29c1a-kube-api-access-dppjj\") pod \"coredns-7d764666f9-48p77\" (UID: \"8f379d63-3a94-4a1c-ba65-0fadd7b29c1a\") " pod="kube-system/coredns-7d764666f9-48p77"
Mar 7 01:10:25.526386 kubelet[3316]: I0307 01:10:25.526277 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn4hm\" (UniqueName: \"kubernetes.io/projected/9fdbd44f-4161-484b-9805-6d16c28c7cf4-kube-api-access-mn4hm\") pod \"coredns-7d764666f9-qmx6l\" (UID: \"9fdbd44f-4161-484b-9805-6d16c28c7cf4\") " pod="kube-system/coredns-7d764666f9-qmx6l"
Mar 7 01:10:25.569148 kubelet[3316]: I0307 01:10:25.569071 3316 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-gsvgl" podStartSLOduration=1.783080491 podStartE2EDuration="17.569052611s" podCreationTimestamp="2026-03-07 01:10:08 +0000 UTC" firstStartedPulling="2026-03-07 01:10:08.693086255 +0000 UTC m=+6.592944513" lastFinishedPulling="2026-03-07 01:10:24.479058369 +0000 UTC m=+22.378916633" observedRunningTime="2026-03-07 01:10:25.565371555 +0000 UTC m=+23.465229818" watchObservedRunningTime="2026-03-07 01:10:25.569052611 +0000 UTC m=+23.468910875"
Mar 7 01:10:25.814020 containerd[1984]: time="2026-03-07T01:10:25.813908475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-48p77,Uid:8f379d63-3a94-4a1c-ba65-0fadd7b29c1a,Namespace:kube-system,Attempt:0,}"
Mar 7 01:10:25.831239 containerd[1984]: time="2026-03-07T01:10:25.831189383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-qmx6l,Uid:9fdbd44f-4161-484b-9805-6d16c28c7cf4,Namespace:kube-system,Attempt:0,}"
Mar 7 01:10:28.055591 systemd-networkd[1894]: cilium_host: Link UP
Mar 7 01:10:28.056738 systemd-networkd[1894]: cilium_net: Link UP
Mar 7 01:10:28.057392 systemd-networkd[1894]: cilium_net: Gained carrier
Mar 7 01:10:28.057666 systemd-networkd[1894]: cilium_host: Gained carrier
Mar 7 01:10:28.057715 (udev-worker)[4149]: Network interface NamePolicy= disabled on kernel command line.
Mar 7 01:10:28.059753 (udev-worker)[4092]: Network interface NamePolicy= disabled on kernel command line.
Mar 7 01:10:28.158850 systemd-networkd[1894]: cilium_net: Gained IPv6LL
Mar 7 01:10:28.414703 systemd-networkd[1894]: cilium_vxlan: Link UP
Mar 7 01:10:28.414714 systemd-networkd[1894]: cilium_vxlan: Gained carrier
Mar 7 01:10:28.647687 systemd-networkd[1894]: cilium_host: Gained IPv6LL
Mar 7 01:10:29.170609 kernel: NET: Registered PF_ALG protocol family
Mar 7 01:10:29.606841 systemd-networkd[1894]: cilium_vxlan: Gained IPv6LL
Mar 7 01:10:30.229219 systemd-networkd[1894]: lxc_health: Link UP
Mar 7 01:10:30.238396 systemd-networkd[1894]: lxc_health: Gained carrier
Mar 7 01:10:30.456034 systemd-networkd[1894]: lxc5c6fdca77896: Link UP
Mar 7 01:10:30.460090 systemd-networkd[1894]: lxc09ee9d167093: Link UP
Mar 7 01:10:30.460589 kernel: eth0: renamed from tmpa04c0
Mar 7 01:10:30.470569 kernel: eth0: renamed from tmp43ce6
Mar 7 01:10:30.475753 systemd-networkd[1894]: lxc09ee9d167093: Gained carrier
Mar 7 01:10:30.477016 systemd-networkd[1894]: lxc5c6fdca77896: Gained carrier
Mar 7 01:10:30.477888 (udev-worker)[4155]: Network interface NamePolicy= disabled on kernel command line.
Mar 7 01:10:31.654754 systemd-networkd[1894]: lxc09ee9d167093: Gained IPv6LL
Mar 7 01:10:31.910761 systemd-networkd[1894]: lxc_health: Gained IPv6LL
Mar 7 01:10:31.975217 systemd-networkd[1894]: lxc5c6fdca77896: Gained IPv6LL
Mar 7 01:10:33.985231 ntpd[1957]: Listen normally on 8 cilium_host 192.168.0.149:123
Mar 7 01:10:33.985330 ntpd[1957]: Listen normally on 9 cilium_net [fe80::a065:32ff:feb6:da9b%4]:123
Mar 7 01:10:33.985389 ntpd[1957]: Listen normally on 10 cilium_host [fe80::a822:65ff:fed8:89f8%5]:123
Mar 7 01:10:33.985432 ntpd[1957]: Listen normally on 11 cilium_vxlan [fe80::d4c0:93ff:fe2e:37f4%6]:123
Mar 7 01:10:33.985473 ntpd[1957]: Listen normally on 12 lxc_health [fe80::d4d0:84ff:fe80:724e%8]:123
Mar 7 01:10:33.985516 ntpd[1957]: Listen normally on 13 lxc09ee9d167093 [fe80::5cc1:1fff:fe42:f042%10]:123
Mar 7 01:10:33.986576 ntpd[1957]: Listen normally on 14 lxc5c6fdca77896 [fe80::44bb:17ff:fe06:59ed%12]:123
Mar 7 01:10:35.132340 containerd[1984]: time="2026-03-07T01:10:35.132224531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:10:35.133495 containerd[1984]: time="2026-03-07T01:10:35.132944169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:10:35.133495 containerd[1984]: time="2026-03-07T01:10:35.133040921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:10:35.133495 containerd[1984]: time="2026-03-07T01:10:35.133397481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:10:35.177802 systemd[1]: Started cri-containerd-43ce603f19ddc71856db12b7693d1eae207eea014b2afe7edec600d7493b5ce0.scope - libcontainer container 43ce603f19ddc71856db12b7693d1eae207eea014b2afe7edec600d7493b5ce0.
Mar 7 01:10:35.264157 containerd[1984]: time="2026-03-07T01:10:35.264043290Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:10:35.265062 containerd[1984]: time="2026-03-07T01:10:35.265008564Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:10:35.265198 containerd[1984]: time="2026-03-07T01:10:35.265172422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:10:35.265481 containerd[1984]: time="2026-03-07T01:10:35.265376336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:10:35.316624 containerd[1984]: time="2026-03-07T01:10:35.313919446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-48p77,Uid:8f379d63-3a94-4a1c-ba65-0fadd7b29c1a,Namespace:kube-system,Attempt:0,} returns sandbox id \"43ce603f19ddc71856db12b7693d1eae207eea014b2afe7edec600d7493b5ce0\""
Mar 7 01:10:35.315817 systemd[1]: Started cri-containerd-a04c029e65b59f5083ee29629398abe88f45066d84217f0de934453ed9be54bb.scope - libcontainer container a04c029e65b59f5083ee29629398abe88f45066d84217f0de934453ed9be54bb.
Mar 7 01:10:35.343122 containerd[1984]: time="2026-03-07T01:10:35.343074489Z" level=info msg="CreateContainer within sandbox \"43ce603f19ddc71856db12b7693d1eae207eea014b2afe7edec600d7493b5ce0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 7 01:10:35.385820 containerd[1984]: time="2026-03-07T01:10:35.385040909Z" level=info msg="CreateContainer within sandbox \"43ce603f19ddc71856db12b7693d1eae207eea014b2afe7edec600d7493b5ce0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9e34aa827694e16927b39c864cf5b7db82538a8b4b34b2bdd1ded736b8f6383a\""
Mar 7 01:10:35.388796 containerd[1984]: time="2026-03-07T01:10:35.386885106Z" level=info msg="StartContainer for \"9e34aa827694e16927b39c864cf5b7db82538a8b4b34b2bdd1ded736b8f6383a\""
Mar 7 01:10:35.424637 containerd[1984]: time="2026-03-07T01:10:35.424584998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-qmx6l,Uid:9fdbd44f-4161-484b-9805-6d16c28c7cf4,Namespace:kube-system,Attempt:0,} returns sandbox id \"a04c029e65b59f5083ee29629398abe88f45066d84217f0de934453ed9be54bb\""
Mar 7 01:10:35.436727 containerd[1984]: time="2026-03-07T01:10:35.436671555Z" level=info msg="CreateContainer within sandbox \"a04c029e65b59f5083ee29629398abe88f45066d84217f0de934453ed9be54bb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 7 01:10:35.453793 systemd[1]: Started cri-containerd-9e34aa827694e16927b39c864cf5b7db82538a8b4b34b2bdd1ded736b8f6383a.scope - libcontainer container 9e34aa827694e16927b39c864cf5b7db82538a8b4b34b2bdd1ded736b8f6383a.
Mar 7 01:10:35.473652 containerd[1984]: time="2026-03-07T01:10:35.473586965Z" level=info msg="CreateContainer within sandbox \"a04c029e65b59f5083ee29629398abe88f45066d84217f0de934453ed9be54bb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7f514ef5b4bee1edd34e28cc845d36c6ee0c536218d211f50280d9707b2751b8\""
Mar 7 01:10:35.475524 containerd[1984]: time="2026-03-07T01:10:35.475486192Z" level=info msg="StartContainer for \"7f514ef5b4bee1edd34e28cc845d36c6ee0c536218d211f50280d9707b2751b8\""
Mar 7 01:10:35.521538 containerd[1984]: time="2026-03-07T01:10:35.521498934Z" level=info msg="StartContainer for \"9e34aa827694e16927b39c864cf5b7db82538a8b4b34b2bdd1ded736b8f6383a\" returns successfully"
Mar 7 01:10:35.522219 systemd[1]: Started cri-containerd-7f514ef5b4bee1edd34e28cc845d36c6ee0c536218d211f50280d9707b2751b8.scope - libcontainer container 7f514ef5b4bee1edd34e28cc845d36c6ee0c536218d211f50280d9707b2751b8.
Mar 7 01:10:35.565374 containerd[1984]: time="2026-03-07T01:10:35.565321344Z" level=info msg="StartContainer for \"7f514ef5b4bee1edd34e28cc845d36c6ee0c536218d211f50280d9707b2751b8\" returns successfully"
Mar 7 01:10:35.587082 kubelet[3316]: I0307 01:10:35.585928 3316 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-qmx6l" podStartSLOduration=27.585907936 podStartE2EDuration="27.585907936s" podCreationTimestamp="2026-03-07 01:10:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:10:35.584402192 +0000 UTC m=+33.484260461" watchObservedRunningTime="2026-03-07 01:10:35.585907936 +0000 UTC m=+33.485766199"
Mar 7 01:10:35.601329 kubelet[3316]: I0307 01:10:35.601253 3316 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-48p77" podStartSLOduration=27.601223213 podStartE2EDuration="27.601223213s" podCreationTimestamp="2026-03-07 01:10:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:10:35.598968515 +0000 UTC m=+33.498826778" watchObservedRunningTime="2026-03-07 01:10:35.601223213 +0000 UTC m=+33.501081473"
Mar 7 01:10:36.141977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2986494075.mount: Deactivated successfully.
Mar 7 01:10:41.100728 systemd[1]: Started sshd@9-172.31.31.131:22-68.220.241.50:51252.service - OpenSSH per-connection server daemon (68.220.241.50:51252).
Mar 7 01:10:41.617001 sshd[4685]: Accepted publickey for core from 68.220.241.50 port 51252 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:10:41.621068 sshd[4685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:10:41.638390 systemd-logind[1964]: New session 10 of user core.
Mar 7 01:10:41.644263 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 7 01:10:42.731838 sshd[4685]: pam_unix(sshd:session): session closed for user core
Mar 7 01:10:42.747697 systemd[1]: sshd@9-172.31.31.131:22-68.220.241.50:51252.service: Deactivated successfully.
Mar 7 01:10:42.751333 systemd[1]: session-10.scope: Deactivated successfully.
Mar 7 01:10:42.754644 systemd-logind[1964]: Session 10 logged out. Waiting for processes to exit.
Mar 7 01:10:42.757820 systemd-logind[1964]: Removed session 10.
Mar 7 01:10:47.821862 systemd[1]: Started sshd@10-172.31.31.131:22-68.220.241.50:34894.service - OpenSSH per-connection server daemon (68.220.241.50:34894).
Mar 7 01:10:48.318301 sshd[4710]: Accepted publickey for core from 68.220.241.50 port 34894 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:10:48.318957 sshd[4710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:10:48.324877 systemd-logind[1964]: New session 11 of user core.
Mar 7 01:10:48.333818 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 7 01:10:48.751224 sshd[4710]: pam_unix(sshd:session): session closed for user core
Mar 7 01:10:48.755849 systemd[1]: sshd@10-172.31.31.131:22-68.220.241.50:34894.service: Deactivated successfully.
Mar 7 01:10:48.758182 systemd[1]: session-11.scope: Deactivated successfully.
Mar 7 01:10:48.759758 systemd-logind[1964]: Session 11 logged out. Waiting for processes to exit.
Mar 7 01:10:48.761253 systemd-logind[1964]: Removed session 11.
Mar 7 01:10:53.842962 systemd[1]: Started sshd@11-172.31.31.131:22-68.220.241.50:44862.service - OpenSSH per-connection server daemon (68.220.241.50:44862).
Mar 7 01:10:54.333498 sshd[4725]: Accepted publickey for core from 68.220.241.50 port 44862 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:10:54.334180 sshd[4725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:10:54.340227 systemd-logind[1964]: New session 12 of user core.
Mar 7 01:10:54.344769 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 7 01:10:54.749847 sshd[4725]: pam_unix(sshd:session): session closed for user core
Mar 7 01:10:54.754419 systemd-logind[1964]: Session 12 logged out. Waiting for processes to exit.
Mar 7 01:10:54.755104 systemd[1]: sshd@11-172.31.31.131:22-68.220.241.50:44862.service: Deactivated successfully.
Mar 7 01:10:54.758635 systemd[1]: session-12.scope: Deactivated successfully.
Mar 7 01:10:54.759942 systemd-logind[1964]: Removed session 12.
Mar 7 01:10:54.840030 systemd[1]: Started sshd@12-172.31.31.131:22-68.220.241.50:44874.service - OpenSSH per-connection server daemon (68.220.241.50:44874).
Mar 7 01:10:55.339068 sshd[4740]: Accepted publickey for core from 68.220.241.50 port 44874 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:10:55.340929 sshd[4740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:10:55.347636 systemd-logind[1964]: New session 13 of user core.
Mar 7 01:10:55.351869 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 7 01:10:55.821176 sshd[4740]: pam_unix(sshd:session): session closed for user core
Mar 7 01:10:55.824840 systemd[1]: sshd@12-172.31.31.131:22-68.220.241.50:44874.service: Deactivated successfully.
Mar 7 01:10:55.827623 systemd[1]: session-13.scope: Deactivated successfully.
Mar 7 01:10:55.829977 systemd-logind[1964]: Session 13 logged out. Waiting for processes to exit.
Mar 7 01:10:55.832351 systemd-logind[1964]: Removed session 13.
Mar 7 01:10:55.910983 systemd[1]: Started sshd@13-172.31.31.131:22-68.220.241.50:44890.service - OpenSSH per-connection server daemon (68.220.241.50:44890).
Mar 7 01:10:56.393800 sshd[4751]: Accepted publickey for core from 68.220.241.50 port 44890 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:10:56.394437 sshd[4751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:10:56.400891 systemd-logind[1964]: New session 14 of user core.
Mar 7 01:10:56.409778 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 7 01:10:56.821135 sshd[4751]: pam_unix(sshd:session): session closed for user core
Mar 7 01:10:56.826391 systemd-logind[1964]: Session 14 logged out. Waiting for processes to exit.
Mar 7 01:10:56.826733 systemd[1]: sshd@13-172.31.31.131:22-68.220.241.50:44890.service: Deactivated successfully.
Mar 7 01:10:56.829473 systemd[1]: session-14.scope: Deactivated successfully.
Mar 7 01:10:56.830764 systemd-logind[1964]: Removed session 14.
Mar 7 01:11:02.022170 systemd[1]: Started sshd@14-172.31.31.131:22-68.220.241.50:44892.service - OpenSSH per-connection server daemon (68.220.241.50:44892).
Mar 7 01:11:02.690421 sshd[4765]: Accepted publickey for core from 68.220.241.50 port 44892 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:11:02.695706 sshd[4765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:11:02.707355 systemd-logind[1964]: New session 15 of user core.
Mar 7 01:11:02.711821 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 7 01:11:03.126221 sshd[4765]: pam_unix(sshd:session): session closed for user core
Mar 7 01:11:03.134912 systemd-logind[1964]: Session 15 logged out. Waiting for processes to exit.
Mar 7 01:11:03.135754 systemd[1]: sshd@14-172.31.31.131:22-68.220.241.50:44892.service: Deactivated successfully.
Mar 7 01:11:03.140091 systemd[1]: session-15.scope: Deactivated successfully.
Mar 7 01:11:03.143433 systemd-logind[1964]: Removed session 15.
Mar 7 01:11:08.221008 systemd[1]: Started sshd@15-172.31.31.131:22-68.220.241.50:43196.service - OpenSSH per-connection server daemon (68.220.241.50:43196).
Mar 7 01:11:08.704674 sshd[4780]: Accepted publickey for core from 68.220.241.50 port 43196 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:11:08.705977 sshd[4780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:11:08.711875 systemd-logind[1964]: New session 16 of user core.
Mar 7 01:11:08.715753 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 7 01:11:09.123835 sshd[4780]: pam_unix(sshd:session): session closed for user core
Mar 7 01:11:09.128113 systemd[1]: sshd@15-172.31.31.131:22-68.220.241.50:43196.service: Deactivated successfully.
Mar 7 01:11:09.131705 systemd[1]: session-16.scope: Deactivated successfully.
Mar 7 01:11:09.132716 systemd-logind[1964]: Session 16 logged out. Waiting for processes to exit.
Mar 7 01:11:09.134233 systemd-logind[1964]: Removed session 16.
Mar 7 01:11:09.216986 systemd[1]: Started sshd@16-172.31.31.131:22-68.220.241.50:43210.service - OpenSSH per-connection server daemon (68.220.241.50:43210).
Mar 7 01:11:09.714038 sshd[4793]: Accepted publickey for core from 68.220.241.50 port 43210 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:11:09.715814 sshd[4793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:11:09.721880 systemd-logind[1964]: New session 17 of user core.
Mar 7 01:11:09.726840 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 7 01:11:15.246460 sshd[4793]: pam_unix(sshd:session): session closed for user core
Mar 7 01:11:15.254099 systemd[1]: sshd@16-172.31.31.131:22-68.220.241.50:43210.service: Deactivated successfully.
Mar 7 01:11:15.256652 systemd[1]: session-17.scope: Deactivated successfully.
Mar 7 01:11:15.258871 systemd-logind[1964]: Session 17 logged out. Waiting for processes to exit.
Mar 7 01:11:15.260936 systemd-logind[1964]: Removed session 17.
Mar 7 01:11:15.340958 systemd[1]: Started sshd@17-172.31.31.131:22-68.220.241.50:60248.service - OpenSSH per-connection server daemon (68.220.241.50:60248).
Mar 7 01:11:15.850595 sshd[4806]: Accepted publickey for core from 68.220.241.50 port 60248 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:11:15.852359 sshd[4806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:11:15.860036 systemd-logind[1964]: New session 18 of user core.
Mar 7 01:11:15.862856 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 7 01:11:17.360383 sshd[4806]: pam_unix(sshd:session): session closed for user core
Mar 7 01:11:17.364424 systemd-logind[1964]: Session 18 logged out. Waiting for processes to exit.
Mar 7 01:11:17.365089 systemd[1]: sshd@17-172.31.31.131:22-68.220.241.50:60248.service: Deactivated successfully.
Mar 7 01:11:17.368006 systemd[1]: session-18.scope: Deactivated successfully.
Mar 7 01:11:17.370960 systemd-logind[1964]: Removed session 18.
Mar 7 01:11:17.454015 systemd[1]: Started sshd@18-172.31.31.131:22-68.220.241.50:60264.service - OpenSSH per-connection server daemon (68.220.241.50:60264).
Mar 7 01:11:17.956179 sshd[4822]: Accepted publickey for core from 68.220.241.50 port 60264 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:11:17.957789 sshd[4822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:11:17.962958 systemd-logind[1964]: New session 19 of user core.
Mar 7 01:11:17.968782 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 7 01:11:18.636459 sshd[4822]: pam_unix(sshd:session): session closed for user core
Mar 7 01:11:18.645192 systemd[1]: sshd@18-172.31.31.131:22-68.220.241.50:60264.service: Deactivated successfully.
Mar 7 01:11:18.645838 systemd-logind[1964]: Session 19 logged out. Waiting for processes to exit.
Mar 7 01:11:18.651932 systemd[1]: session-19.scope: Deactivated successfully.
Mar 7 01:11:18.653761 systemd-logind[1964]: Removed session 19.
Mar 7 01:11:18.726311 systemd[1]: Started sshd@19-172.31.31.131:22-68.220.241.50:60274.service - OpenSSH per-connection server daemon (68.220.241.50:60274).
Mar 7 01:11:19.229609 sshd[4837]: Accepted publickey for core from 68.220.241.50 port 60274 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:11:19.230537 sshd[4837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:11:19.237071 systemd-logind[1964]: New session 20 of user core.
Mar 7 01:11:19.241771 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 7 01:11:19.651139 sshd[4837]: pam_unix(sshd:session): session closed for user core
Mar 7 01:11:19.655902 systemd-logind[1964]: Session 20 logged out. Waiting for processes to exit.
Mar 7 01:11:19.657329 systemd[1]: sshd@19-172.31.31.131:22-68.220.241.50:60274.service: Deactivated successfully.
Mar 7 01:11:19.659751 systemd[1]: session-20.scope: Deactivated successfully.
Mar 7 01:11:19.661611 systemd-logind[1964]: Removed session 20.
Mar 7 01:11:24.740915 systemd[1]: Started sshd@20-172.31.31.131:22-68.220.241.50:40660.service - OpenSSH per-connection server daemon (68.220.241.50:40660).
Mar 7 01:11:25.233245 sshd[4854]: Accepted publickey for core from 68.220.241.50 port 40660 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:11:25.234878 sshd[4854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:11:25.241499 systemd-logind[1964]: New session 21 of user core.
Mar 7 01:11:25.248914 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 7 01:11:25.651127 sshd[4854]: pam_unix(sshd:session): session closed for user core
Mar 7 01:11:25.655614 systemd[1]: sshd@20-172.31.31.131:22-68.220.241.50:40660.service: Deactivated successfully.
Mar 7 01:11:25.658362 systemd[1]: session-21.scope: Deactivated successfully.
Mar 7 01:11:25.660102 systemd-logind[1964]: Session 21 logged out. Waiting for processes to exit.
Mar 7 01:11:25.662252 systemd-logind[1964]: Removed session 21.
Mar 7 01:11:30.741939 systemd[1]: Started sshd@21-172.31.31.131:22-68.220.241.50:40668.service - OpenSSH per-connection server daemon (68.220.241.50:40668).
Mar 7 01:11:31.230591 sshd[4867]: Accepted publickey for core from 68.220.241.50 port 40668 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:11:31.231454 sshd[4867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:11:31.238035 systemd-logind[1964]: New session 22 of user core.
Mar 7 01:11:31.239794 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 7 01:11:31.641359 sshd[4867]: pam_unix(sshd:session): session closed for user core
Mar 7 01:11:31.645982 systemd-logind[1964]: Session 22 logged out. Waiting for processes to exit.
Mar 7 01:11:31.646726 systemd[1]: sshd@21-172.31.31.131:22-68.220.241.50:40668.service: Deactivated successfully.
Mar 7 01:11:31.649011 systemd[1]: session-22.scope: Deactivated successfully.
Mar 7 01:11:31.650348 systemd-logind[1964]: Removed session 22.
Mar 7 01:11:31.738938 systemd[1]: Started sshd@22-172.31.31.131:22-68.220.241.50:40684.service - OpenSSH per-connection server daemon (68.220.241.50:40684).
Mar 7 01:11:32.219790 sshd[4880]: Accepted publickey for core from 68.220.241.50 port 40684 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:11:32.221417 sshd[4880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:11:32.226111 systemd-logind[1964]: New session 23 of user core.
Mar 7 01:11:32.238828 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 7 01:11:34.642588 containerd[1984]: time="2026-03-07T01:11:34.641207781Z" level=info msg="StopContainer for \"30ae799a869a88c59e9ed120899966d4b780f8f86f565d9ea95cf144fe9df735\" with timeout 30 (s)"
Mar 7 01:11:34.645599 containerd[1984]: time="2026-03-07T01:11:34.643368435Z" level=info msg="Stop container \"30ae799a869a88c59e9ed120899966d4b780f8f86f565d9ea95cf144fe9df735\" with signal terminated"
Mar 7 01:11:34.643393 systemd[1]: run-containerd-runc-k8s.io-22d6da250aa9b4dedc4ec22c15159165982893435db20c7c88a4cf64471b8c20-runc.iiAxpZ.mount: Deactivated successfully.
Mar 7 01:11:34.670937 systemd[1]: cri-containerd-30ae799a869a88c59e9ed120899966d4b780f8f86f565d9ea95cf144fe9df735.scope: Deactivated successfully.
Mar 7 01:11:34.721362 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30ae799a869a88c59e9ed120899966d4b780f8f86f565d9ea95cf144fe9df735-rootfs.mount: Deactivated successfully.
Mar 7 01:11:34.726482 containerd[1984]: time="2026-03-07T01:11:34.726398838Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 7 01:11:34.735066 containerd[1984]: time="2026-03-07T01:11:34.735027799Z" level=info msg="StopContainer for \"22d6da250aa9b4dedc4ec22c15159165982893435db20c7c88a4cf64471b8c20\" with timeout 2 (s)"
Mar 7 01:11:34.735379 containerd[1984]: time="2026-03-07T01:11:34.735341434Z" level=info msg="Stop container \"22d6da250aa9b4dedc4ec22c15159165982893435db20c7c88a4cf64471b8c20\" with signal terminated"
Mar 7 01:11:34.745539 systemd-networkd[1894]: lxc_health: Link DOWN
Mar 7 01:11:34.745563 systemd-networkd[1894]: lxc_health: Lost carrier
Mar 7 01:11:34.765696 systemd[1]: cri-containerd-22d6da250aa9b4dedc4ec22c15159165982893435db20c7c88a4cf64471b8c20.scope: Deactivated successfully.
Mar 7 01:11:34.765993 systemd[1]: cri-containerd-22d6da250aa9b4dedc4ec22c15159165982893435db20c7c88a4cf64471b8c20.scope: Consumed 8.595s CPU time.
Mar 7 01:11:34.778137 containerd[1984]: time="2026-03-07T01:11:34.777885214Z" level=info msg="shim disconnected" id=30ae799a869a88c59e9ed120899966d4b780f8f86f565d9ea95cf144fe9df735 namespace=k8s.io
Mar 7 01:11:34.778137 containerd[1984]: time="2026-03-07T01:11:34.777950654Z" level=warning msg="cleaning up after shim disconnected" id=30ae799a869a88c59e9ed120899966d4b780f8f86f565d9ea95cf144fe9df735 namespace=k8s.io
Mar 7 01:11:34.778137 containerd[1984]: time="2026-03-07T01:11:34.777963882Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:11:34.797087 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22d6da250aa9b4dedc4ec22c15159165982893435db20c7c88a4cf64471b8c20-rootfs.mount: Deactivated successfully.
Mar 7 01:11:34.806853 containerd[1984]: time="2026-03-07T01:11:34.806790559Z" level=warning msg="cleanup warnings time=\"2026-03-07T01:11:34Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 7 01:11:34.809007 containerd[1984]: time="2026-03-07T01:11:34.808920269Z" level=info msg="shim disconnected" id=22d6da250aa9b4dedc4ec22c15159165982893435db20c7c88a4cf64471b8c20 namespace=k8s.io
Mar 7 01:11:34.809136 containerd[1984]: time="2026-03-07T01:11:34.809007867Z" level=warning msg="cleaning up after shim disconnected" id=22d6da250aa9b4dedc4ec22c15159165982893435db20c7c88a4cf64471b8c20 namespace=k8s.io
Mar 7 01:11:34.809136 containerd[1984]: time="2026-03-07T01:11:34.809020969Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:11:34.811641 containerd[1984]: time="2026-03-07T01:11:34.811604587Z" level=info msg="StopContainer for \"30ae799a869a88c59e9ed120899966d4b780f8f86f565d9ea95cf144fe9df735\" returns successfully"
Mar 7 01:11:34.813214 containerd[1984]: time="2026-03-07T01:11:34.813181786Z" level=info msg="StopPodSandbox for \"476555b9c42e03dc3852c76af4e2566cd70e5cef39a9215626ced4d0906548d1\""
Mar 7 01:11:34.820052 containerd[1984]: time="2026-03-07T01:11:34.819868279Z" level=info msg="Container to stop \"30ae799a869a88c59e9ed120899966d4b780f8f86f565d9ea95cf144fe9df735\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 01:11:34.824359 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-476555b9c42e03dc3852c76af4e2566cd70e5cef39a9215626ced4d0906548d1-shm.mount: Deactivated successfully.
Mar 7 01:11:34.833576 containerd[1984]: time="2026-03-07T01:11:34.833505906Z" level=warning msg="cleanup warnings time=\"2026-03-07T01:11:34Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 7 01:11:34.841467 containerd[1984]: time="2026-03-07T01:11:34.841419071Z" level=info msg="StopContainer for \"22d6da250aa9b4dedc4ec22c15159165982893435db20c7c88a4cf64471b8c20\" returns successfully"
Mar 7 01:11:34.842674 containerd[1984]: time="2026-03-07T01:11:34.842095338Z" level=info msg="StopPodSandbox for \"41646e99923712e9f67a8b5b632b0b37fea0dd58912f6c7e55833a792b02f055\""
Mar 7 01:11:34.842674 containerd[1984]: time="2026-03-07T01:11:34.842140784Z" level=info msg="Container to stop \"d005a39de75fe248a0ee59b762e15701bfc95cced05aad8e345831ed290d9d8f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 01:11:34.842674 containerd[1984]: time="2026-03-07T01:11:34.842158731Z" level=info msg="Container to stop \"0b547c89ea5221ba3439437273187ed7eeca2212ba1eba5dda10e55b22599c84\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 01:11:34.842674 containerd[1984]: time="2026-03-07T01:11:34.842172596Z" level=info msg="Container to stop \"22d6da250aa9b4dedc4ec22c15159165982893435db20c7c88a4cf64471b8c20\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 01:11:34.842674 containerd[1984]: time="2026-03-07T01:11:34.842186015Z" level=info msg="Container to stop \"9e47b2ca99fb7e245a29078931364a8c8d1c456a0e8316c4bdd4010e93a8b564\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 01:11:34.842674 containerd[1984]: time="2026-03-07T01:11:34.842199632Z" level=info msg="Container to stop \"07905af5a819f565ebe9fe9c926015aecd7376eb388d3eeda0113929193fb9b0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 01:11:34.848011 systemd[1]: cri-containerd-476555b9c42e03dc3852c76af4e2566cd70e5cef39a9215626ced4d0906548d1.scope: Deactivated successfully.
Mar 7 01:11:34.861180 systemd[1]: cri-containerd-41646e99923712e9f67a8b5b632b0b37fea0dd58912f6c7e55833a792b02f055.scope: Deactivated successfully.
Mar 7 01:11:34.895637 containerd[1984]: time="2026-03-07T01:11:34.895281481Z" level=info msg="shim disconnected" id=476555b9c42e03dc3852c76af4e2566cd70e5cef39a9215626ced4d0906548d1 namespace=k8s.io
Mar 7 01:11:34.895637 containerd[1984]: time="2026-03-07T01:11:34.895345602Z" level=warning msg="cleaning up after shim disconnected" id=476555b9c42e03dc3852c76af4e2566cd70e5cef39a9215626ced4d0906548d1 namespace=k8s.io
Mar 7 01:11:34.895637 containerd[1984]: time="2026-03-07T01:11:34.895358289Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:11:34.902447 containerd[1984]: time="2026-03-07T01:11:34.902376392Z" level=info msg="shim disconnected" id=41646e99923712e9f67a8b5b632b0b37fea0dd58912f6c7e55833a792b02f055 namespace=k8s.io
Mar 7 01:11:34.902447 containerd[1984]: time="2026-03-07T01:11:34.902441884Z" level=warning msg="cleaning up after shim disconnected" id=41646e99923712e9f67a8b5b632b0b37fea0dd58912f6c7e55833a792b02f055 namespace=k8s.io
Mar 7 01:11:34.902447 containerd[1984]: time="2026-03-07T01:11:34.902453369Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:11:34.934099 containerd[1984]: time="2026-03-07T01:11:34.932998712Z" level=info msg="TearDown network for sandbox \"41646e99923712e9f67a8b5b632b0b37fea0dd58912f6c7e55833a792b02f055\" successfully"
Mar 7 01:11:34.934099 containerd[1984]: time="2026-03-07T01:11:34.933046393Z" level=info msg="StopPodSandbox for \"41646e99923712e9f67a8b5b632b0b37fea0dd58912f6c7e55833a792b02f055\" returns successfully"
Mar 7 01:11:34.934441 containerd[1984]: time="2026-03-07T01:11:34.934387053Z" level=info msg="TearDown network for sandbox \"476555b9c42e03dc3852c76af4e2566cd70e5cef39a9215626ced4d0906548d1\" successfully"
Mar 7 01:11:34.934527 containerd[1984]: time="2026-03-07T01:11:34.934510426Z" level=info msg="StopPodSandbox for \"476555b9c42e03dc3852c76af4e2566cd70e5cef39a9215626ced4d0906548d1\" returns successfully"
Mar 7 01:11:35.073564 kubelet[3316]: I0307 01:11:35.073499 3316 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/92988e03-c98b-40f5-88ca-bebf8290ccdb-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/92988e03-c98b-40f5-88ca-bebf8290ccdb-cilium-config-path\") pod \"92988e03-c98b-40f5-88ca-bebf8290ccdb\" (UID: \"92988e03-c98b-40f5-88ca-bebf8290ccdb\") "
Mar 7 01:11:35.074048 kubelet[3316]: I0307 01:11:35.073625 3316 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-cilium-cgroup\") pod \"92988e03-c98b-40f5-88ca-bebf8290ccdb\" (UID: \"92988e03-c98b-40f5-88ca-bebf8290ccdb\") "
Mar 7 01:11:35.074048 kubelet[3316]: I0307 01:11:35.073652 3316 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-xtables-lock\") pod \"92988e03-c98b-40f5-88ca-bebf8290ccdb\" (UID: \"92988e03-c98b-40f5-88ca-bebf8290ccdb\") "
Mar 7 01:11:35.074048 kubelet[3316]: I0307 01:11:35.073677 3316 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-hostproc\" (UniqueName: \"kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-hostproc\") pod \"92988e03-c98b-40f5-88ca-bebf8290ccdb\" (UID: \"92988e03-c98b-40f5-88ca-bebf8290ccdb\") "
Mar 7 01:11:35.074048 kubelet[3316]: I0307 01:11:35.073719 3316 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-cni-path\" (UniqueName: \"kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-cni-path\") pod \"92988e03-c98b-40f5-88ca-bebf8290ccdb\" (UID: \"92988e03-c98b-40f5-88ca-bebf8290ccdb\") "
Mar 7 01:11:35.074048 kubelet[3316]: I0307 01:11:35.073751 3316 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/d99f49b0-42b6-44fc-ac84-48bc0ca83467-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d99f49b0-42b6-44fc-ac84-48bc0ca83467-cilium-config-path\") pod \"d99f49b0-42b6-44fc-ac84-48bc0ca83467\" (UID: \"d99f49b0-42b6-44fc-ac84-48bc0ca83467\") "
Mar 7 01:11:35.074233 kubelet[3316]: I0307 01:11:35.073779 3316 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-bpf-maps\" (UniqueName: \"kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-bpf-maps\") pod \"92988e03-c98b-40f5-88ca-bebf8290ccdb\" (UID: \"92988e03-c98b-40f5-88ca-bebf8290ccdb\") "
Mar 7 01:11:35.074233 kubelet[3316]: I0307 01:11:35.073824 3316 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-host-proc-sys-net\") pod \"92988e03-c98b-40f5-88ca-bebf8290ccdb\" (UID: \"92988e03-c98b-40f5-88ca-bebf8290ccdb\") "
Mar 7 01:11:35.074233 kubelet[3316]: I0307 01:11:35.073852 3316 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-host-proc-sys-kernel\") pod \"92988e03-c98b-40f5-88ca-bebf8290ccdb\" (UID: \"92988e03-c98b-40f5-88ca-bebf8290ccdb\") "
Mar 7 01:11:35.074233 kubelet[3316]: I0307 01:11:35.073880 3316 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-etc-cni-netd\") pod \"92988e03-c98b-40f5-88ca-bebf8290ccdb\" (UID: \"92988e03-c98b-40f5-88ca-bebf8290ccdb\") "
Mar 7 01:11:35.074233 kubelet[3316]: I0307 01:11:35.073905 3316 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-cilium-run\" (UniqueName: \"kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-cilium-run\") pod \"92988e03-c98b-40f5-88ca-bebf8290ccdb\" (UID: \"92988e03-c98b-40f5-88ca-bebf8290ccdb\") "
Mar 7 01:11:35.098574 kubelet[3316]: I0307 01:11:35.094873 3316 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92988e03-c98b-40f5-88ca-bebf8290ccdb-cilium-config-path" pod "92988e03-c98b-40f5-88ca-bebf8290ccdb" (UID: "92988e03-c98b-40f5-88ca-bebf8290ccdb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 7 01:11:35.098574 kubelet[3316]: I0307 01:11:35.097393 3316 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/d99f49b0-42b6-44fc-ac84-48bc0ca83467-kube-api-access-ff7zv\" (UniqueName: \"kubernetes.io/projected/d99f49b0-42b6-44fc-ac84-48bc0ca83467-kube-api-access-ff7zv\") pod \"d99f49b0-42b6-44fc-ac84-48bc0ca83467\" (UID: \"d99f49b0-42b6-44fc-ac84-48bc0ca83467\") "
Mar 7 01:11:35.098574 kubelet[3316]: I0307 01:11:35.097441 3316 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-lib-modules\" (UniqueName: \"kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-lib-modules\") pod \"92988e03-c98b-40f5-88ca-bebf8290ccdb\" (UID: \"92988e03-c98b-40f5-88ca-bebf8290ccdb\") "
Mar 7 01:11:35.098574 kubelet[3316]: I0307 01:11:35.097477 3316 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/92988e03-c98b-40f5-88ca-bebf8290ccdb-hubble-tls\" (UniqueName: \"kubernetes.io/projected/92988e03-c98b-40f5-88ca-bebf8290ccdb-hubble-tls\") pod \"92988e03-c98b-40f5-88ca-bebf8290ccdb\" (UID: \"92988e03-c98b-40f5-88ca-bebf8290ccdb\") "
Mar 7 01:11:35.098574 kubelet[3316]: I0307 01:11:35.097507 3316 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/92988e03-c98b-40f5-88ca-bebf8290ccdb-kube-api-access-fvst5\" (UniqueName: \"kubernetes.io/projected/92988e03-c98b-40f5-88ca-bebf8290ccdb-kube-api-access-fvst5\") pod \"92988e03-c98b-40f5-88ca-bebf8290ccdb\" (UID: \"92988e03-c98b-40f5-88ca-bebf8290ccdb\") "
Mar 7 01:11:35.098919 kubelet[3316]: I0307 01:11:35.097538 3316 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/92988e03-c98b-40f5-88ca-bebf8290ccdb-clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/92988e03-c98b-40f5-88ca-bebf8290ccdb-clustermesh-secrets\") pod \"92988e03-c98b-40f5-88ca-bebf8290ccdb\" (UID: \"92988e03-c98b-40f5-88ca-bebf8290ccdb\") "
Mar 7 01:11:35.098919 kubelet[3316]: I0307 01:11:35.097622 3316 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/92988e03-c98b-40f5-88ca-bebf8290ccdb-cilium-config-path\") on node \"ip-172-31-31-131\" DevicePath \"\""
Mar 7 01:11:35.099123 kubelet[3316]: I0307 01:11:35.099089 3316 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-cilium-cgroup" pod "92988e03-c98b-40f5-88ca-bebf8290ccdb" (UID: "92988e03-c98b-40f5-88ca-bebf8290ccdb"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:11:35.099242 kubelet[3316]: I0307 01:11:35.099226 3316 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-xtables-lock" pod "92988e03-c98b-40f5-88ca-bebf8290ccdb" (UID: "92988e03-c98b-40f5-88ca-bebf8290ccdb"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:11:35.099327 kubelet[3316]: I0307 01:11:35.099313 3316 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-hostproc" pod "92988e03-c98b-40f5-88ca-bebf8290ccdb" (UID: "92988e03-c98b-40f5-88ca-bebf8290ccdb"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:11:35.099422 kubelet[3316]: I0307 01:11:35.099408 3316 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-cni-path" pod "92988e03-c98b-40f5-88ca-bebf8290ccdb" (UID: "92988e03-c98b-40f5-88ca-bebf8290ccdb"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:11:35.100983 kubelet[3316]: I0307 01:11:35.100947 3316 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92988e03-c98b-40f5-88ca-bebf8290ccdb-clustermesh-secrets" pod "92988e03-c98b-40f5-88ca-bebf8290ccdb" (UID: "92988e03-c98b-40f5-88ca-bebf8290ccdb"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 7 01:11:35.101102 kubelet[3316]: I0307 01:11:35.101008 3316 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-lib-modules" pod "92988e03-c98b-40f5-88ca-bebf8290ccdb" (UID: "92988e03-c98b-40f5-88ca-bebf8290ccdb"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:11:35.102624 kubelet[3316]: I0307 01:11:35.102429 3316 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d99f49b0-42b6-44fc-ac84-48bc0ca83467-cilium-config-path" pod "d99f49b0-42b6-44fc-ac84-48bc0ca83467" (UID: "d99f49b0-42b6-44fc-ac84-48bc0ca83467"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 7 01:11:35.102624 kubelet[3316]: I0307 01:11:35.102479 3316 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-bpf-maps" pod "92988e03-c98b-40f5-88ca-bebf8290ccdb" (UID: "92988e03-c98b-40f5-88ca-bebf8290ccdb"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:11:35.102624 kubelet[3316]: I0307 01:11:35.102498 3316 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-host-proc-sys-net" pod "92988e03-c98b-40f5-88ca-bebf8290ccdb" (UID: "92988e03-c98b-40f5-88ca-bebf8290ccdb"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:11:35.102624 kubelet[3316]: I0307 01:11:35.102517 3316 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-host-proc-sys-kernel" pod "92988e03-c98b-40f5-88ca-bebf8290ccdb" (UID: "92988e03-c98b-40f5-88ca-bebf8290ccdb"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:11:35.102624 kubelet[3316]: I0307 01:11:35.102534 3316 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-etc-cni-netd" pod "92988e03-c98b-40f5-88ca-bebf8290ccdb" (UID: "92988e03-c98b-40f5-88ca-bebf8290ccdb"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:11:35.103521 kubelet[3316]: I0307 01:11:35.102968 3316 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-cilium-run" pod "92988e03-c98b-40f5-88ca-bebf8290ccdb" (UID: "92988e03-c98b-40f5-88ca-bebf8290ccdb"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:11:35.103521 kubelet[3316]: I0307 01:11:35.103087 3316 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d99f49b0-42b6-44fc-ac84-48bc0ca83467-kube-api-access-ff7zv" pod "d99f49b0-42b6-44fc-ac84-48bc0ca83467" (UID: "d99f49b0-42b6-44fc-ac84-48bc0ca83467"). InnerVolumeSpecName "kube-api-access-ff7zv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 7 01:11:35.106242 kubelet[3316]: I0307 01:11:35.106208 3316 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92988e03-c98b-40f5-88ca-bebf8290ccdb-hubble-tls" pod "92988e03-c98b-40f5-88ca-bebf8290ccdb" (UID: "92988e03-c98b-40f5-88ca-bebf8290ccdb"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 7 01:11:35.106639 kubelet[3316]: I0307 01:11:35.106609 3316 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92988e03-c98b-40f5-88ca-bebf8290ccdb-kube-api-access-fvst5" pod "92988e03-c98b-40f5-88ca-bebf8290ccdb" (UID: "92988e03-c98b-40f5-88ca-bebf8290ccdb"). InnerVolumeSpecName "kube-api-access-fvst5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 7 01:11:35.198179 kubelet[3316]: I0307 01:11:35.198038 3316 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-cilium-cgroup\") on node \"ip-172-31-31-131\" DevicePath \"\""
Mar 7 01:11:35.198179 kubelet[3316]: I0307 01:11:35.198083 3316 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-xtables-lock\") on node \"ip-172-31-31-131\" DevicePath \"\""
Mar 7 01:11:35.198179 kubelet[3316]: I0307 01:11:35.198095 3316 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-hostproc\") on node \"ip-172-31-31-131\" DevicePath \"\""
Mar 7 01:11:35.198179 kubelet[3316]: I0307 01:11:35.198105 3316 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-cni-path\") on node \"ip-172-31-31-131\" DevicePath \"\""
Mar 7 01:11:35.198179 kubelet[3316]: I0307 01:11:35.198116 3316 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d99f49b0-42b6-44fc-ac84-48bc0ca83467-cilium-config-path\") on node \"ip-172-31-31-131\" DevicePath \"\""
Mar 7 01:11:35.198179 kubelet[3316]: I0307 01:11:35.198126 3316 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-bpf-maps\") on node \"ip-172-31-31-131\" DevicePath \"\""
Mar 7 01:11:35.198179 kubelet[3316]: I0307 01:11:35.198151 3316 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-host-proc-sys-net\") on node \"ip-172-31-31-131\" DevicePath \"\""
Mar 7 01:11:35.199531 kubelet[3316]: I0307 01:11:35.199491 3316 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-host-proc-sys-kernel\") on node \"ip-172-31-31-131\" DevicePath \"\""
Mar 7 01:11:35.199531 kubelet[3316]: I0307 01:11:35.199531 3316 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-etc-cni-netd\") on node \"ip-172-31-31-131\" DevicePath \"\""
Mar 7 01:11:35.199680 kubelet[3316]: I0307 01:11:35.199543 3316 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-cilium-run\") on node \"ip-172-31-31-131\" DevicePath \"\""
Mar 7 01:11:35.199680 kubelet[3316]: I0307 01:11:35.199568 3316 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ff7zv\" (UniqueName: \"kubernetes.io/projected/d99f49b0-42b6-44fc-ac84-48bc0ca83467-kube-api-access-ff7zv\") on node \"ip-172-31-31-131\" DevicePath \"\""
Mar 7 01:11:35.199680 kubelet[3316]: I0307 01:11:35.199579 3316 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92988e03-c98b-40f5-88ca-bebf8290ccdb-lib-modules\") on node \"ip-172-31-31-131\" DevicePath \"\""
Mar 7 01:11:35.199680 kubelet[3316]: I0307 01:11:35.199592 3316 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/92988e03-c98b-40f5-88ca-bebf8290ccdb-hubble-tls\") on node \"ip-172-31-31-131\" DevicePath \"\""
Mar 7 01:11:35.199680 kubelet[3316]: I0307 01:11:35.199602 3316 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fvst5\" (UniqueName: \"kubernetes.io/projected/92988e03-c98b-40f5-88ca-bebf8290ccdb-kube-api-access-fvst5\") on node \"ip-172-31-31-131\" DevicePath \"\""
Mar 7 01:11:35.199680 kubelet[3316]: I0307 01:11:35.199615 3316 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/92988e03-c98b-40f5-88ca-bebf8290ccdb-clustermesh-secrets\") on node \"ip-172-31-31-131\" DevicePath \"\""
Mar 7 01:11:35.330812 systemd[1]: Removed slice kubepods-besteffort-podd99f49b0_42b6_44fc_ac84_48bc0ca83467.slice - libcontainer container kubepods-besteffort-podd99f49b0_42b6_44fc_ac84_48bc0ca83467.slice.
Mar 7 01:11:35.332790 systemd[1]: Removed slice kubepods-burstable-pod92988e03_c98b_40f5_88ca_bebf8290ccdb.slice - libcontainer container kubepods-burstable-pod92988e03_c98b_40f5_88ca_bebf8290ccdb.slice.
Mar 7 01:11:35.333104 systemd[1]: kubepods-burstable-pod92988e03_c98b_40f5_88ca_bebf8290ccdb.slice: Consumed 8.702s CPU time.
Mar 7 01:11:35.624615 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-476555b9c42e03dc3852c76af4e2566cd70e5cef39a9215626ced4d0906548d1-rootfs.mount: Deactivated successfully.
Mar 7 01:11:35.624966 systemd[1]: var-lib-kubelet-pods-d99f49b0\x2d42b6\x2d44fc\x2dac84\x2d48bc0ca83467-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dff7zv.mount: Deactivated successfully.
Mar 7 01:11:35.625065 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41646e99923712e9f67a8b5b632b0b37fea0dd58912f6c7e55833a792b02f055-rootfs.mount: Deactivated successfully.
Mar 7 01:11:35.625148 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-41646e99923712e9f67a8b5b632b0b37fea0dd58912f6c7e55833a792b02f055-shm.mount: Deactivated successfully.
Mar 7 01:11:35.625238 systemd[1]: var-lib-kubelet-pods-92988e03\x2dc98b\x2d40f5\x2d88ca\x2dbebf8290ccdb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfvst5.mount: Deactivated successfully.
Mar 7 01:11:35.625330 systemd[1]: var-lib-kubelet-pods-92988e03\x2dc98b\x2d40f5\x2d88ca\x2dbebf8290ccdb-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 7 01:11:35.625422 systemd[1]: var-lib-kubelet-pods-92988e03\x2dc98b\x2d40f5\x2d88ca\x2dbebf8290ccdb-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Mar 7 01:11:35.746378 kubelet[3316]: I0307 01:11:35.745006 3316 scope.go:122] "RemoveContainer" containerID="30ae799a869a88c59e9ed120899966d4b780f8f86f565d9ea95cf144fe9df735"
Mar 7 01:11:35.755755 containerd[1984]: time="2026-03-07T01:11:35.755688920Z" level=info msg="RemoveContainer for \"30ae799a869a88c59e9ed120899966d4b780f8f86f565d9ea95cf144fe9df735\""
Mar 7 01:11:35.763044 containerd[1984]: time="2026-03-07T01:11:35.762855771Z" level=info msg="RemoveContainer for \"30ae799a869a88c59e9ed120899966d4b780f8f86f565d9ea95cf144fe9df735\" returns successfully"
Mar 7 01:11:35.765189 kubelet[3316]: I0307 01:11:35.765140 3316 scope.go:122] "RemoveContainer" containerID="30ae799a869a88c59e9ed120899966d4b780f8f86f565d9ea95cf144fe9df735"
Mar 7 01:11:35.784670 containerd[1984]: time="2026-03-07T01:11:35.768424321Z" level=error msg="ContainerStatus for \"30ae799a869a88c59e9ed120899966d4b780f8f86f565d9ea95cf144fe9df735\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"30ae799a869a88c59e9ed120899966d4b780f8f86f565d9ea95cf144fe9df735\": not found"
Mar 7 01:11:35.797539 kubelet[3316]: E0307 01:11:35.796889 3316 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"30ae799a869a88c59e9ed120899966d4b780f8f86f565d9ea95cf144fe9df735\": not found" containerID="30ae799a869a88c59e9ed120899966d4b780f8f86f565d9ea95cf144fe9df735"
Mar 7 01:11:35.799169 kubelet[3316]: I0307 01:11:35.799085 3316 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"30ae799a869a88c59e9ed120899966d4b780f8f86f565d9ea95cf144fe9df735"} err="failed to get container status \"30ae799a869a88c59e9ed120899966d4b780f8f86f565d9ea95cf144fe9df735\": rpc error: code = NotFound desc = an error occurred when try to find container \"30ae799a869a88c59e9ed120899966d4b780f8f86f565d9ea95cf144fe9df735\": not found"
Mar 7 01:11:35.799169 kubelet[3316]: I0307 01:11:35.799171 3316 scope.go:122] "RemoveContainer" containerID="22d6da250aa9b4dedc4ec22c15159165982893435db20c7c88a4cf64471b8c20"
Mar 7 01:11:35.805196 containerd[1984]: time="2026-03-07T01:11:35.805139670Z" level=info msg="RemoveContainer for \"22d6da250aa9b4dedc4ec22c15159165982893435db20c7c88a4cf64471b8c20\""
Mar 7 01:11:35.808998 containerd[1984]: time="2026-03-07T01:11:35.808942517Z" level=info msg="RemoveContainer for \"22d6da250aa9b4dedc4ec22c15159165982893435db20c7c88a4cf64471b8c20\" returns successfully"
Mar 7 01:11:35.810929 kubelet[3316]: I0307 01:11:35.809642 3316 scope.go:122] "RemoveContainer" containerID="0b547c89ea5221ba3439437273187ed7eeca2212ba1eba5dda10e55b22599c84"
Mar 7 01:11:35.813569 containerd[1984]: time="2026-03-07T01:11:35.812615162Z" level=info msg="RemoveContainer for \"0b547c89ea5221ba3439437273187ed7eeca2212ba1eba5dda10e55b22599c84\""
Mar 7 01:11:35.816495 containerd[1984]: time="2026-03-07T01:11:35.816441064Z" level=info msg="RemoveContainer for \"0b547c89ea5221ba3439437273187ed7eeca2212ba1eba5dda10e55b22599c84\" returns successfully"
Mar 7 01:11:35.817863 kubelet[3316]: I0307 01:11:35.817821 3316 scope.go:122] "RemoveContainer" containerID="d005a39de75fe248a0ee59b762e15701bfc95cced05aad8e345831ed290d9d8f"
Mar 7 01:11:35.820501 containerd[1984]: time="2026-03-07T01:11:35.820454582Z" level=info msg="RemoveContainer for \"d005a39de75fe248a0ee59b762e15701bfc95cced05aad8e345831ed290d9d8f\""
Mar 7 01:11:35.824796 containerd[1984]: time="2026-03-07T01:11:35.824659335Z" level=info msg="RemoveContainer for \"d005a39de75fe248a0ee59b762e15701bfc95cced05aad8e345831ed290d9d8f\" returns successfully"
Mar 7 01:11:35.824965 kubelet[3316]: I0307 01:11:35.824936 3316 scope.go:122] "RemoveContainer" containerID="07905af5a819f565ebe9fe9c926015aecd7376eb388d3eeda0113929193fb9b0"
Mar 7 01:11:35.826540 containerd[1984]: time="2026-03-07T01:11:35.826186594Z" level=info msg="RemoveContainer for \"07905af5a819f565ebe9fe9c926015aecd7376eb388d3eeda0113929193fb9b0\""
Mar 7 01:11:35.834656 containerd[1984]: time="2026-03-07T01:11:35.834613480Z" level=info msg="RemoveContainer for \"07905af5a819f565ebe9fe9c926015aecd7376eb388d3eeda0113929193fb9b0\" returns successfully"
Mar 7 01:11:35.837570 kubelet[3316]: I0307 01:11:35.834840 3316 scope.go:122] "RemoveContainer" containerID="9e47b2ca99fb7e245a29078931364a8c8d1c456a0e8316c4bdd4010e93a8b564"
Mar 7 01:11:35.837713 containerd[1984]: time="2026-03-07T01:11:35.836457553Z" level=info msg="RemoveContainer for \"9e47b2ca99fb7e245a29078931364a8c8d1c456a0e8316c4bdd4010e93a8b564\""
Mar 7 01:11:35.844410 containerd[1984]: time="2026-03-07T01:11:35.844368149Z" level=info msg="RemoveContainer for \"9e47b2ca99fb7e245a29078931364a8c8d1c456a0e8316c4bdd4010e93a8b564\" returns successfully"
Mar 7 01:11:35.844917 kubelet[3316]: I0307 01:11:35.844809 3316 scope.go:122] "RemoveContainer" containerID="22d6da250aa9b4dedc4ec22c15159165982893435db20c7c88a4cf64471b8c20"
Mar 7 01:11:35.845900 containerd[1984]: time="2026-03-07T01:11:35.845860161Z" level=error msg="ContainerStatus for \"22d6da250aa9b4dedc4ec22c15159165982893435db20c7c88a4cf64471b8c20\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"22d6da250aa9b4dedc4ec22c15159165982893435db20c7c88a4cf64471b8c20\": not found"
Mar 7 01:11:35.846168 kubelet[3316]: E0307 01:11:35.846129 3316 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"22d6da250aa9b4dedc4ec22c15159165982893435db20c7c88a4cf64471b8c20\": not found" containerID="22d6da250aa9b4dedc4ec22c15159165982893435db20c7c88a4cf64471b8c20"
Mar 7 01:11:35.846250 kubelet[3316]: I0307 01:11:35.846179 3316 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"22d6da250aa9b4dedc4ec22c15159165982893435db20c7c88a4cf64471b8c20"} err="failed to get container status \"22d6da250aa9b4dedc4ec22c15159165982893435db20c7c88a4cf64471b8c20\": rpc error: code = NotFound desc = an error occurred when try to find container \"22d6da250aa9b4dedc4ec22c15159165982893435db20c7c88a4cf64471b8c20\": not found"
Mar 7 01:11:35.846250 kubelet[3316]: I0307 01:11:35.846206 3316 scope.go:122] "RemoveContainer" containerID="0b547c89ea5221ba3439437273187ed7eeca2212ba1eba5dda10e55b22599c84"
Mar 7 01:11:35.847684 containerd[1984]: time="2026-03-07T01:11:35.847640284Z" level=error msg="ContainerStatus for \"0b547c89ea5221ba3439437273187ed7eeca2212ba1eba5dda10e55b22599c84\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0b547c89ea5221ba3439437273187ed7eeca2212ba1eba5dda10e55b22599c84\": not found"
Mar 7 01:11:35.848020 kubelet[3316]: E0307 01:11:35.847872 3316 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0b547c89ea5221ba3439437273187ed7eeca2212ba1eba5dda10e55b22599c84\": not found" containerID="0b547c89ea5221ba3439437273187ed7eeca2212ba1eba5dda10e55b22599c84"
Mar 7 01:11:35.848020 kubelet[3316]: I0307 01:11:35.847975 3316 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0b547c89ea5221ba3439437273187ed7eeca2212ba1eba5dda10e55b22599c84"} err="failed to get container status \"0b547c89ea5221ba3439437273187ed7eeca2212ba1eba5dda10e55b22599c84\": rpc error: code = NotFound desc = an error occurred when try to find container \"0b547c89ea5221ba3439437273187ed7eeca2212ba1eba5dda10e55b22599c84\": not found"
Mar 7 01:11:35.848020 kubelet[3316]: I0307 01:11:35.847999 3316 scope.go:122] "RemoveContainer" containerID="d005a39de75fe248a0ee59b762e15701bfc95cced05aad8e345831ed290d9d8f"
Mar 7 01:11:35.848277 containerd[1984]: time="2026-03-07T01:11:35.848225791Z" level=error msg="ContainerStatus for \"d005a39de75fe248a0ee59b762e15701bfc95cced05aad8e345831ed290d9d8f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d005a39de75fe248a0ee59b762e15701bfc95cced05aad8e345831ed290d9d8f\": not found"
Mar 7 01:11:35.848453 kubelet[3316]: E0307 01:11:35.848423 3316 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d005a39de75fe248a0ee59b762e15701bfc95cced05aad8e345831ed290d9d8f\": not found" containerID="d005a39de75fe248a0ee59b762e15701bfc95cced05aad8e345831ed290d9d8f"
Mar 7 01:11:35.848518 kubelet[3316]: I0307 01:11:35.848463 3316 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d005a39de75fe248a0ee59b762e15701bfc95cced05aad8e345831ed290d9d8f"} err="failed to get container status \"d005a39de75fe248a0ee59b762e15701bfc95cced05aad8e345831ed290d9d8f\": rpc error: code = NotFound desc = an error occurred when try to find container \"d005a39de75fe248a0ee59b762e15701bfc95cced05aad8e345831ed290d9d8f\": not found"
Mar 7 01:11:35.848518 kubelet[3316]: I0307 01:11:35.848485 3316 scope.go:122] "RemoveContainer" containerID="07905af5a819f565ebe9fe9c926015aecd7376eb388d3eeda0113929193fb9b0"
Mar 7 01:11:35.848740
containerd[1984]: time="2026-03-07T01:11:35.848699554Z" level=error msg="ContainerStatus for \"07905af5a819f565ebe9fe9c926015aecd7376eb388d3eeda0113929193fb9b0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"07905af5a819f565ebe9fe9c926015aecd7376eb388d3eeda0113929193fb9b0\": not found" Mar 7 01:11:35.848895 kubelet[3316]: E0307 01:11:35.848865 3316 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"07905af5a819f565ebe9fe9c926015aecd7376eb388d3eeda0113929193fb9b0\": not found" containerID="07905af5a819f565ebe9fe9c926015aecd7376eb388d3eeda0113929193fb9b0" Mar 7 01:11:35.848962 kubelet[3316]: I0307 01:11:35.848898 3316 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"07905af5a819f565ebe9fe9c926015aecd7376eb388d3eeda0113929193fb9b0"} err="failed to get container status \"07905af5a819f565ebe9fe9c926015aecd7376eb388d3eeda0113929193fb9b0\": rpc error: code = NotFound desc = an error occurred when try to find container \"07905af5a819f565ebe9fe9c926015aecd7376eb388d3eeda0113929193fb9b0\": not found" Mar 7 01:11:35.848962 kubelet[3316]: I0307 01:11:35.848918 3316 scope.go:122] "RemoveContainer" containerID="9e47b2ca99fb7e245a29078931364a8c8d1c456a0e8316c4bdd4010e93a8b564" Mar 7 01:11:35.849160 containerd[1984]: time="2026-03-07T01:11:35.849121130Z" level=error msg="ContainerStatus for \"9e47b2ca99fb7e245a29078931364a8c8d1c456a0e8316c4bdd4010e93a8b564\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9e47b2ca99fb7e245a29078931364a8c8d1c456a0e8316c4bdd4010e93a8b564\": not found" Mar 7 01:11:35.849295 kubelet[3316]: E0307 01:11:35.849264 3316 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"9e47b2ca99fb7e245a29078931364a8c8d1c456a0e8316c4bdd4010e93a8b564\": not found" containerID="9e47b2ca99fb7e245a29078931364a8c8d1c456a0e8316c4bdd4010e93a8b564" Mar 7 01:11:35.849351 kubelet[3316]: I0307 01:11:35.849296 3316 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9e47b2ca99fb7e245a29078931364a8c8d1c456a0e8316c4bdd4010e93a8b564"} err="failed to get container status \"9e47b2ca99fb7e245a29078931364a8c8d1c456a0e8316c4bdd4010e93a8b564\": rpc error: code = NotFound desc = an error occurred when try to find container \"9e47b2ca99fb7e245a29078931364a8c8d1c456a0e8316c4bdd4010e93a8b564\": not found" Mar 7 01:11:36.572302 sshd[4880]: pam_unix(sshd:session): session closed for user core Mar 7 01:11:36.578272 systemd[1]: sshd@22-172.31.31.131:22-68.220.241.50:40684.service: Deactivated successfully. Mar 7 01:11:36.578815 systemd-logind[1964]: Session 23 logged out. Waiting for processes to exit. Mar 7 01:11:36.581732 systemd[1]: session-23.scope: Deactivated successfully. Mar 7 01:11:36.582186 systemd[1]: session-23.scope: Consumed 1.433s CPU time. Mar 7 01:11:36.583339 systemd-logind[1964]: Removed session 23. Mar 7 01:11:36.664029 systemd[1]: Started sshd@23-172.31.31.131:22-68.220.241.50:48816.service - OpenSSH per-connection server daemon (68.220.241.50:48816). 
Mar 7 01:11:36.985095 ntpd[1957]: Deleting interface #12 lxc_health, fe80::d4d0:84ff:fe80:724e%8#123, interface stats: received=0, sent=0, dropped=0, active_time=63 secs Mar 7 01:11:36.985449 ntpd[1957]: 7 Mar 01:11:36 ntpd[1957]: Deleting interface #12 lxc_health, fe80::d4d0:84ff:fe80:724e%8#123, interface stats: received=0, sent=0, dropped=0, active_time=63 secs Mar 7 01:11:37.143596 sshd[5040]: Accepted publickey for core from 68.220.241.50 port 48816 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:11:37.145240 sshd[5040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:11:37.151429 systemd-logind[1964]: New session 24 of user core. Mar 7 01:11:37.154791 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 7 01:11:37.320980 kubelet[3316]: I0307 01:11:37.319781 3316 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="92988e03-c98b-40f5-88ca-bebf8290ccdb" path="/var/lib/kubelet/pods/92988e03-c98b-40f5-88ca-bebf8290ccdb/volumes" Mar 7 01:11:37.320980 kubelet[3316]: I0307 01:11:37.320666 3316 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d99f49b0-42b6-44fc-ac84-48bc0ca83467" path="/var/lib/kubelet/pods/d99f49b0-42b6-44fc-ac84-48bc0ca83467/volumes" Mar 7 01:11:38.396439 systemd[1]: Created slice kubepods-burstable-podbebf0f07_75b2_4a1d_b74c_847d612255e8.slice - libcontainer container kubepods-burstable-podbebf0f07_75b2_4a1d_b74c_847d612255e8.slice. Mar 7 01:11:38.398584 sshd[5040]: pam_unix(sshd:session): session closed for user core Mar 7 01:11:38.407165 systemd[1]: sshd@23-172.31.31.131:22-68.220.241.50:48816.service: Deactivated successfully. Mar 7 01:11:38.414172 systemd[1]: session-24.scope: Deactivated successfully. Mar 7 01:11:38.418600 systemd-logind[1964]: Session 24 logged out. Waiting for processes to exit. Mar 7 01:11:38.424705 systemd-logind[1964]: Removed session 24. 
Mar 7 01:11:38.491074 systemd[1]: Started sshd@24-172.31.31.131:22-68.220.241.50:48818.service - OpenSSH per-connection server daemon (68.220.241.50:48818). Mar 7 01:11:38.522867 kubelet[3316]: I0307 01:11:38.522312 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bebf0f07-75b2-4a1d-b74c-847d612255e8-bpf-maps\") pod \"cilium-995rb\" (UID: \"bebf0f07-75b2-4a1d-b74c-847d612255e8\") " pod="kube-system/cilium-995rb" Mar 7 01:11:38.522867 kubelet[3316]: I0307 01:11:38.522359 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bebf0f07-75b2-4a1d-b74c-847d612255e8-cni-path\") pod \"cilium-995rb\" (UID: \"bebf0f07-75b2-4a1d-b74c-847d612255e8\") " pod="kube-system/cilium-995rb" Mar 7 01:11:38.522867 kubelet[3316]: I0307 01:11:38.522374 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bebf0f07-75b2-4a1d-b74c-847d612255e8-xtables-lock\") pod \"cilium-995rb\" (UID: \"bebf0f07-75b2-4a1d-b74c-847d612255e8\") " pod="kube-system/cilium-995rb" Mar 7 01:11:38.522867 kubelet[3316]: I0307 01:11:38.522412 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bebf0f07-75b2-4a1d-b74c-847d612255e8-clustermesh-secrets\") pod \"cilium-995rb\" (UID: \"bebf0f07-75b2-4a1d-b74c-847d612255e8\") " pod="kube-system/cilium-995rb" Mar 7 01:11:38.522867 kubelet[3316]: I0307 01:11:38.522429 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bebf0f07-75b2-4a1d-b74c-847d612255e8-host-proc-sys-net\") pod \"cilium-995rb\" (UID: \"bebf0f07-75b2-4a1d-b74c-847d612255e8\") " 
pod="kube-system/cilium-995rb" Mar 7 01:11:38.522867 kubelet[3316]: I0307 01:11:38.522445 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd99q\" (UniqueName: \"kubernetes.io/projected/bebf0f07-75b2-4a1d-b74c-847d612255e8-kube-api-access-xd99q\") pod \"cilium-995rb\" (UID: \"bebf0f07-75b2-4a1d-b74c-847d612255e8\") " pod="kube-system/cilium-995rb" Mar 7 01:11:38.523503 kubelet[3316]: I0307 01:11:38.522462 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bebf0f07-75b2-4a1d-b74c-847d612255e8-lib-modules\") pod \"cilium-995rb\" (UID: \"bebf0f07-75b2-4a1d-b74c-847d612255e8\") " pod="kube-system/cilium-995rb" Mar 7 01:11:38.523503 kubelet[3316]: I0307 01:11:38.522493 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bebf0f07-75b2-4a1d-b74c-847d612255e8-host-proc-sys-kernel\") pod \"cilium-995rb\" (UID: \"bebf0f07-75b2-4a1d-b74c-847d612255e8\") " pod="kube-system/cilium-995rb" Mar 7 01:11:38.523503 kubelet[3316]: I0307 01:11:38.522509 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bebf0f07-75b2-4a1d-b74c-847d612255e8-cilium-cgroup\") pod \"cilium-995rb\" (UID: \"bebf0f07-75b2-4a1d-b74c-847d612255e8\") " pod="kube-system/cilium-995rb" Mar 7 01:11:38.523503 kubelet[3316]: I0307 01:11:38.522524 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bebf0f07-75b2-4a1d-b74c-847d612255e8-etc-cni-netd\") pod \"cilium-995rb\" (UID: \"bebf0f07-75b2-4a1d-b74c-847d612255e8\") " pod="kube-system/cilium-995rb" Mar 7 01:11:38.523503 kubelet[3316]: I0307 01:11:38.522539 3316 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bebf0f07-75b2-4a1d-b74c-847d612255e8-cilium-config-path\") pod \"cilium-995rb\" (UID: \"bebf0f07-75b2-4a1d-b74c-847d612255e8\") " pod="kube-system/cilium-995rb" Mar 7 01:11:38.523503 kubelet[3316]: I0307 01:11:38.522596 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bebf0f07-75b2-4a1d-b74c-847d612255e8-cilium-run\") pod \"cilium-995rb\" (UID: \"bebf0f07-75b2-4a1d-b74c-847d612255e8\") " pod="kube-system/cilium-995rb" Mar 7 01:11:38.523756 kubelet[3316]: I0307 01:11:38.522613 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bebf0f07-75b2-4a1d-b74c-847d612255e8-hostproc\") pod \"cilium-995rb\" (UID: \"bebf0f07-75b2-4a1d-b74c-847d612255e8\") " pod="kube-system/cilium-995rb" Mar 7 01:11:38.523756 kubelet[3316]: I0307 01:11:38.522630 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bebf0f07-75b2-4a1d-b74c-847d612255e8-cilium-ipsec-secrets\") pod \"cilium-995rb\" (UID: \"bebf0f07-75b2-4a1d-b74c-847d612255e8\") " pod="kube-system/cilium-995rb" Mar 7 01:11:38.523756 kubelet[3316]: I0307 01:11:38.522644 3316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bebf0f07-75b2-4a1d-b74c-847d612255e8-hubble-tls\") pod \"cilium-995rb\" (UID: \"bebf0f07-75b2-4a1d-b74c-847d612255e8\") " pod="kube-system/cilium-995rb" Mar 7 01:11:38.545768 kubelet[3316]: E0307 01:11:38.545709 3316 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: cni plugin not initialized" Mar 7 01:11:38.724630 containerd[1984]: time="2026-03-07T01:11:38.724585142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-995rb,Uid:bebf0f07-75b2-4a1d-b74c-847d612255e8,Namespace:kube-system,Attempt:0,}" Mar 7 01:11:38.763517 containerd[1984]: time="2026-03-07T01:11:38.763383220Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:11:38.763517 containerd[1984]: time="2026-03-07T01:11:38.763450334Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:11:38.763517 containerd[1984]: time="2026-03-07T01:11:38.763468274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:11:38.764041 containerd[1984]: time="2026-03-07T01:11:38.763613464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:11:38.787820 systemd[1]: Started cri-containerd-3bcb0d2fcd1f3acdeb8ce14d995baab8e690527710193556f7c8b5ec74e9f526.scope - libcontainer container 3bcb0d2fcd1f3acdeb8ce14d995baab8e690527710193556f7c8b5ec74e9f526. 
Mar 7 01:11:38.818599 containerd[1984]: time="2026-03-07T01:11:38.818483897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-995rb,Uid:bebf0f07-75b2-4a1d-b74c-847d612255e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"3bcb0d2fcd1f3acdeb8ce14d995baab8e690527710193556f7c8b5ec74e9f526\"" Mar 7 01:11:38.834607 containerd[1984]: time="2026-03-07T01:11:38.833815428Z" level=info msg="CreateContainer within sandbox \"3bcb0d2fcd1f3acdeb8ce14d995baab8e690527710193556f7c8b5ec74e9f526\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 7 01:11:38.854179 containerd[1984]: time="2026-03-07T01:11:38.854130802Z" level=info msg="CreateContainer within sandbox \"3bcb0d2fcd1f3acdeb8ce14d995baab8e690527710193556f7c8b5ec74e9f526\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0cf1f27072ff84cbc6586987509a4ea253b78e36ccd90142147746dd89539c99\"" Mar 7 01:11:38.854869 containerd[1984]: time="2026-03-07T01:11:38.854837041Z" level=info msg="StartContainer for \"0cf1f27072ff84cbc6586987509a4ea253b78e36ccd90142147746dd89539c99\"" Mar 7 01:11:38.886769 systemd[1]: Started cri-containerd-0cf1f27072ff84cbc6586987509a4ea253b78e36ccd90142147746dd89539c99.scope - libcontainer container 0cf1f27072ff84cbc6586987509a4ea253b78e36ccd90142147746dd89539c99. Mar 7 01:11:38.922408 containerd[1984]: time="2026-03-07T01:11:38.921754360Z" level=info msg="StartContainer for \"0cf1f27072ff84cbc6586987509a4ea253b78e36ccd90142147746dd89539c99\" returns successfully" Mar 7 01:11:38.938932 systemd[1]: cri-containerd-0cf1f27072ff84cbc6586987509a4ea253b78e36ccd90142147746dd89539c99.scope: Deactivated successfully. 
Mar 7 01:11:38.996383 containerd[1984]: time="2026-03-07T01:11:38.996227493Z" level=info msg="shim disconnected" id=0cf1f27072ff84cbc6586987509a4ea253b78e36ccd90142147746dd89539c99 namespace=k8s.io Mar 7 01:11:38.996383 containerd[1984]: time="2026-03-07T01:11:38.996293326Z" level=warning msg="cleaning up after shim disconnected" id=0cf1f27072ff84cbc6586987509a4ea253b78e36ccd90142147746dd89539c99 namespace=k8s.io Mar 7 01:11:38.996383 containerd[1984]: time="2026-03-07T01:11:38.996307601Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:11:39.000792 sshd[5052]: Accepted publickey for core from 68.220.241.50 port 48818 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:11:39.004527 sshd[5052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:11:39.014800 systemd-logind[1964]: New session 25 of user core. Mar 7 01:11:39.020239 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 7 01:11:39.355135 sshd[5052]: pam_unix(sshd:session): session closed for user core Mar 7 01:11:39.360693 systemd[1]: sshd@24-172.31.31.131:22-68.220.241.50:48818.service: Deactivated successfully. Mar 7 01:11:39.363394 systemd[1]: session-25.scope: Deactivated successfully. Mar 7 01:11:39.364798 systemd-logind[1964]: Session 25 logged out. Waiting for processes to exit. Mar 7 01:11:39.365999 systemd-logind[1964]: Removed session 25. Mar 7 01:11:39.448910 systemd[1]: Started sshd@25-172.31.31.131:22-68.220.241.50:48826.service - OpenSSH per-connection server daemon (68.220.241.50:48826). 
Mar 7 01:11:39.800876 containerd[1984]: time="2026-03-07T01:11:39.800829588Z" level=info msg="CreateContainer within sandbox \"3bcb0d2fcd1f3acdeb8ce14d995baab8e690527710193556f7c8b5ec74e9f526\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 7 01:11:39.827829 containerd[1984]: time="2026-03-07T01:11:39.827751858Z" level=info msg="CreateContainer within sandbox \"3bcb0d2fcd1f3acdeb8ce14d995baab8e690527710193556f7c8b5ec74e9f526\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"368a6fbea4d0722c17338522408a2f920ba867ee5058e2f0dfda1a18564a420b\"" Mar 7 01:11:39.829824 containerd[1984]: time="2026-03-07T01:11:39.828649265Z" level=info msg="StartContainer for \"368a6fbea4d0722c17338522408a2f920ba867ee5058e2f0dfda1a18564a420b\"" Mar 7 01:11:39.880791 systemd[1]: Started cri-containerd-368a6fbea4d0722c17338522408a2f920ba867ee5058e2f0dfda1a18564a420b.scope - libcontainer container 368a6fbea4d0722c17338522408a2f920ba867ee5058e2f0dfda1a18564a420b. Mar 7 01:11:39.917070 containerd[1984]: time="2026-03-07T01:11:39.916951562Z" level=info msg="StartContainer for \"368a6fbea4d0722c17338522408a2f920ba867ee5058e2f0dfda1a18564a420b\" returns successfully" Mar 7 01:11:39.941168 sshd[5171]: Accepted publickey for core from 68.220.241.50 port 48826 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:11:39.943770 sshd[5171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:11:39.958282 systemd-logind[1964]: New session 26 of user core. Mar 7 01:11:39.962018 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 7 01:11:40.114427 systemd[1]: cri-containerd-368a6fbea4d0722c17338522408a2f920ba867ee5058e2f0dfda1a18564a420b.scope: Deactivated successfully. 
Mar 7 01:11:40.160332 containerd[1984]: time="2026-03-07T01:11:40.160259279Z" level=info msg="shim disconnected" id=368a6fbea4d0722c17338522408a2f920ba867ee5058e2f0dfda1a18564a420b namespace=k8s.io Mar 7 01:11:40.160673 containerd[1984]: time="2026-03-07T01:11:40.160642707Z" level=warning msg="cleaning up after shim disconnected" id=368a6fbea4d0722c17338522408a2f920ba867ee5058e2f0dfda1a18564a420b namespace=k8s.io Mar 7 01:11:40.160985 containerd[1984]: time="2026-03-07T01:11:40.160741429Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:11:40.634500 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-368a6fbea4d0722c17338522408a2f920ba867ee5058e2f0dfda1a18564a420b-rootfs.mount: Deactivated successfully. Mar 7 01:11:40.810931 containerd[1984]: time="2026-03-07T01:11:40.810875321Z" level=info msg="CreateContainer within sandbox \"3bcb0d2fcd1f3acdeb8ce14d995baab8e690527710193556f7c8b5ec74e9f526\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 7 01:11:40.840303 containerd[1984]: time="2026-03-07T01:11:40.840258581Z" level=info msg="CreateContainer within sandbox \"3bcb0d2fcd1f3acdeb8ce14d995baab8e690527710193556f7c8b5ec74e9f526\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"728c9f41d14437f0fc5e7c3c83922ced9f822059aba59563486534ae641c2f75\"" Mar 7 01:11:40.842004 containerd[1984]: time="2026-03-07T01:11:40.841928061Z" level=info msg="StartContainer for \"728c9f41d14437f0fc5e7c3c83922ced9f822059aba59563486534ae641c2f75\"" Mar 7 01:11:40.907801 systemd[1]: Started cri-containerd-728c9f41d14437f0fc5e7c3c83922ced9f822059aba59563486534ae641c2f75.scope - libcontainer container 728c9f41d14437f0fc5e7c3c83922ced9f822059aba59563486534ae641c2f75. 
Mar 7 01:11:41.005377 containerd[1984]: time="2026-03-07T01:11:41.005330650Z" level=info msg="StartContainer for \"728c9f41d14437f0fc5e7c3c83922ced9f822059aba59563486534ae641c2f75\" returns successfully" Mar 7 01:11:41.260671 systemd[1]: cri-containerd-728c9f41d14437f0fc5e7c3c83922ced9f822059aba59563486534ae641c2f75.scope: Deactivated successfully. Mar 7 01:11:41.301085 containerd[1984]: time="2026-03-07T01:11:41.300992393Z" level=info msg="shim disconnected" id=728c9f41d14437f0fc5e7c3c83922ced9f822059aba59563486534ae641c2f75 namespace=k8s.io Mar 7 01:11:41.301085 containerd[1984]: time="2026-03-07T01:11:41.301054890Z" level=warning msg="cleaning up after shim disconnected" id=728c9f41d14437f0fc5e7c3c83922ced9f822059aba59563486534ae641c2f75 namespace=k8s.io Mar 7 01:11:41.301085 containerd[1984]: time="2026-03-07T01:11:41.301072552Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:11:41.634501 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-728c9f41d14437f0fc5e7c3c83922ced9f822059aba59563486534ae641c2f75-rootfs.mount: Deactivated successfully. Mar 7 01:11:41.808933 containerd[1984]: time="2026-03-07T01:11:41.808872435Z" level=info msg="CreateContainer within sandbox \"3bcb0d2fcd1f3acdeb8ce14d995baab8e690527710193556f7c8b5ec74e9f526\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 7 01:11:41.841972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3687812549.mount: Deactivated successfully. 
Mar 7 01:11:41.845814 containerd[1984]: time="2026-03-07T01:11:41.845763893Z" level=info msg="CreateContainer within sandbox \"3bcb0d2fcd1f3acdeb8ce14d995baab8e690527710193556f7c8b5ec74e9f526\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"867246486dca5d9020d6067793c2995bc600e5bff3c6299032f480be76e165ee\"" Mar 7 01:11:41.846949 containerd[1984]: time="2026-03-07T01:11:41.846485016Z" level=info msg="StartContainer for \"867246486dca5d9020d6067793c2995bc600e5bff3c6299032f480be76e165ee\"" Mar 7 01:11:41.883775 systemd[1]: Started cri-containerd-867246486dca5d9020d6067793c2995bc600e5bff3c6299032f480be76e165ee.scope - libcontainer container 867246486dca5d9020d6067793c2995bc600e5bff3c6299032f480be76e165ee. Mar 7 01:11:41.914881 systemd[1]: cri-containerd-867246486dca5d9020d6067793c2995bc600e5bff3c6299032f480be76e165ee.scope: Deactivated successfully. Mar 7 01:11:41.919309 containerd[1984]: time="2026-03-07T01:11:41.919106969Z" level=info msg="StartContainer for \"867246486dca5d9020d6067793c2995bc600e5bff3c6299032f480be76e165ee\" returns successfully" Mar 7 01:11:41.970614 containerd[1984]: time="2026-03-07T01:11:41.970319267Z" level=info msg="shim disconnected" id=867246486dca5d9020d6067793c2995bc600e5bff3c6299032f480be76e165ee namespace=k8s.io Mar 7 01:11:41.970614 containerd[1984]: time="2026-03-07T01:11:41.970381825Z" level=warning msg="cleaning up after shim disconnected" id=867246486dca5d9020d6067793c2995bc600e5bff3c6299032f480be76e165ee namespace=k8s.io Mar 7 01:11:41.970614 containerd[1984]: time="2026-03-07T01:11:41.970394701Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:11:42.634524 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-867246486dca5d9020d6067793c2995bc600e5bff3c6299032f480be76e165ee-rootfs.mount: Deactivated successfully. 
Mar 7 01:11:42.817824 containerd[1984]: time="2026-03-07T01:11:42.817782493Z" level=info msg="CreateContainer within sandbox \"3bcb0d2fcd1f3acdeb8ce14d995baab8e690527710193556f7c8b5ec74e9f526\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 7 01:11:42.849732 containerd[1984]: time="2026-03-07T01:11:42.849682221Z" level=info msg="CreateContainer within sandbox \"3bcb0d2fcd1f3acdeb8ce14d995baab8e690527710193556f7c8b5ec74e9f526\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6f114c4a9327bb89fe802473ca6faec8c6261f797c33c26640bf43b3f1ee20d1\"" Mar 7 01:11:42.850866 containerd[1984]: time="2026-03-07T01:11:42.850832147Z" level=info msg="StartContainer for \"6f114c4a9327bb89fe802473ca6faec8c6261f797c33c26640bf43b3f1ee20d1\"" Mar 7 01:11:42.893797 systemd[1]: Started cri-containerd-6f114c4a9327bb89fe802473ca6faec8c6261f797c33c26640bf43b3f1ee20d1.scope - libcontainer container 6f114c4a9327bb89fe802473ca6faec8c6261f797c33c26640bf43b3f1ee20d1. Mar 7 01:11:42.933178 containerd[1984]: time="2026-03-07T01:11:42.933131352Z" level=info msg="StartContainer for \"6f114c4a9327bb89fe802473ca6faec8c6261f797c33c26640bf43b3f1ee20d1\" returns successfully" Mar 7 01:11:43.634741 systemd[1]: run-containerd-runc-k8s.io-6f114c4a9327bb89fe802473ca6faec8c6261f797c33c26640bf43b3f1ee20d1-runc.mijD5b.mount: Deactivated successfully. Mar 7 01:11:43.765597 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Mar 7 01:11:46.956673 (udev-worker)[5915]: Network interface NamePolicy= disabled on kernel command line. Mar 7 01:11:46.957347 (udev-worker)[5916]: Network interface NamePolicy= disabled on kernel command line. Mar 7 01:11:46.964341 systemd-networkd[1894]: lxc_health: Link UP Mar 7 01:11:46.998819 systemd-networkd[1894]: lxc_health: Gained carrier Mar 7 01:11:47.243464 systemd[1]: run-containerd-runc-k8s.io-6f114c4a9327bb89fe802473ca6faec8c6261f797c33c26640bf43b3f1ee20d1-runc.Rm9y0h.mount: Deactivated successfully. 
Mar 7 01:11:48.520769 systemd-networkd[1894]: lxc_health: Gained IPv6LL Mar 7 01:11:48.753055 kubelet[3316]: I0307 01:11:48.752983 3316 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-995rb" podStartSLOduration=10.752962239 podStartE2EDuration="10.752962239s" podCreationTimestamp="2026-03-07 01:11:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:11:43.88551232 +0000 UTC m=+101.785370605" watchObservedRunningTime="2026-03-07 01:11:48.752962239 +0000 UTC m=+106.652820502" Mar 7 01:11:50.985317 ntpd[1957]: Listen normally on 15 lxc_health [fe80::c1d:65ff:fe18:5631%14]:123 Mar 7 01:11:50.985782 ntpd[1957]: 7 Mar 01:11:50 ntpd[1957]: Listen normally on 15 lxc_health [fe80::c1d:65ff:fe18:5631%14]:123 Mar 7 01:11:54.012370 sshd[5171]: pam_unix(sshd:session): session closed for user core Mar 7 01:11:54.017359 systemd-logind[1964]: Session 26 logged out. Waiting for processes to exit. Mar 7 01:11:54.018212 systemd[1]: sshd@25-172.31.31.131:22-68.220.241.50:48826.service: Deactivated successfully. Mar 7 01:11:54.020698 systemd[1]: session-26.scope: Deactivated successfully. Mar 7 01:11:54.022176 systemd-logind[1964]: Removed session 26. 
Mar 7 01:12:03.223710 containerd[1984]: time="2026-03-07T01:12:03.223275851Z" level=info msg="StopPodSandbox for \"476555b9c42e03dc3852c76af4e2566cd70e5cef39a9215626ced4d0906548d1\""
Mar 7 01:12:03.224687 containerd[1984]: time="2026-03-07T01:12:03.223794091Z" level=info msg="TearDown network for sandbox \"476555b9c42e03dc3852c76af4e2566cd70e5cef39a9215626ced4d0906548d1\" successfully"
Mar 7 01:12:03.224687 containerd[1984]: time="2026-03-07T01:12:03.223815946Z" level=info msg="StopPodSandbox for \"476555b9c42e03dc3852c76af4e2566cd70e5cef39a9215626ced4d0906548d1\" returns successfully"
Mar 7 01:12:03.230536 containerd[1984]: time="2026-03-07T01:12:03.230477642Z" level=info msg="RemovePodSandbox for \"476555b9c42e03dc3852c76af4e2566cd70e5cef39a9215626ced4d0906548d1\""
Mar 7 01:12:03.259795 containerd[1984]: time="2026-03-07T01:12:03.258368522Z" level=info msg="Forcibly stopping sandbox \"476555b9c42e03dc3852c76af4e2566cd70e5cef39a9215626ced4d0906548d1\""
Mar 7 01:12:03.259958 containerd[1984]: time="2026-03-07T01:12:03.259852048Z" level=info msg="TearDown network for sandbox \"476555b9c42e03dc3852c76af4e2566cd70e5cef39a9215626ced4d0906548d1\" successfully"
Mar 7 01:12:03.295473 containerd[1984]: time="2026-03-07T01:12:03.291478343Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"476555b9c42e03dc3852c76af4e2566cd70e5cef39a9215626ced4d0906548d1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 7 01:12:03.295473 containerd[1984]: time="2026-03-07T01:12:03.292120063Z" level=info msg="RemovePodSandbox \"476555b9c42e03dc3852c76af4e2566cd70e5cef39a9215626ced4d0906548d1\" returns successfully"
Mar 7 01:12:03.542278 containerd[1984]: time="2026-03-07T01:12:03.537830977Z" level=info msg="StopPodSandbox for \"41646e99923712e9f67a8b5b632b0b37fea0dd58912f6c7e55833a792b02f055\""
Mar 7 01:12:03.548528 containerd[1984]: time="2026-03-07T01:12:03.548306352Z" level=info msg="TearDown network for sandbox \"41646e99923712e9f67a8b5b632b0b37fea0dd58912f6c7e55833a792b02f055\" successfully"
Mar 7 01:12:03.548528 containerd[1984]: time="2026-03-07T01:12:03.548530358Z" level=info msg="StopPodSandbox for \"41646e99923712e9f67a8b5b632b0b37fea0dd58912f6c7e55833a792b02f055\" returns successfully"
Mar 7 01:12:03.551384 containerd[1984]: time="2026-03-07T01:12:03.551337589Z" level=info msg="RemovePodSandbox for \"41646e99923712e9f67a8b5b632b0b37fea0dd58912f6c7e55833a792b02f055\""
Mar 7 01:12:03.551384 containerd[1984]: time="2026-03-07T01:12:03.551383070Z" level=info msg="Forcibly stopping sandbox \"41646e99923712e9f67a8b5b632b0b37fea0dd58912f6c7e55833a792b02f055\""
Mar 7 01:12:03.551614 containerd[1984]: time="2026-03-07T01:12:03.551465065Z" level=info msg="TearDown network for sandbox \"41646e99923712e9f67a8b5b632b0b37fea0dd58912f6c7e55833a792b02f055\" successfully"
Mar 7 01:12:03.569516 containerd[1984]: time="2026-03-07T01:12:03.569448537Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"41646e99923712e9f67a8b5b632b0b37fea0dd58912f6c7e55833a792b02f055\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 7 01:12:03.569784 containerd[1984]: time="2026-03-07T01:12:03.569534730Z" level=info msg="RemovePodSandbox \"41646e99923712e9f67a8b5b632b0b37fea0dd58912f6c7e55833a792b02f055\" returns successfully"
Mar 7 01:12:36.287132 kubelet[3316]: E0307 01:12:36.286858 3316 controller.go:251] "Failed to update lease" err="Put \"https://172.31.31.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-131?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 7 01:12:36.692079 systemd[1]: cri-containerd-d4c2e1696f80de9b12cdd6e8d49dda26226c3eeb1adbaf4507a551b22fb2e354.scope: Deactivated successfully.
Mar 7 01:12:36.692390 systemd[1]: cri-containerd-d4c2e1696f80de9b12cdd6e8d49dda26226c3eeb1adbaf4507a551b22fb2e354.scope: Consumed 2.915s CPU time, 15.4M memory peak, 0B memory swap peak.
Mar 7 01:12:36.725742 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4c2e1696f80de9b12cdd6e8d49dda26226c3eeb1adbaf4507a551b22fb2e354-rootfs.mount: Deactivated successfully.
Mar 7 01:12:36.738495 containerd[1984]: time="2026-03-07T01:12:36.738407849Z" level=info msg="shim disconnected" id=d4c2e1696f80de9b12cdd6e8d49dda26226c3eeb1adbaf4507a551b22fb2e354 namespace=k8s.io
Mar 7 01:12:36.739214 containerd[1984]: time="2026-03-07T01:12:36.738617964Z" level=warning msg="cleaning up after shim disconnected" id=d4c2e1696f80de9b12cdd6e8d49dda26226c3eeb1adbaf4507a551b22fb2e354 namespace=k8s.io
Mar 7 01:12:36.739214 containerd[1984]: time="2026-03-07T01:12:36.738635083Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:12:36.755628 containerd[1984]: time="2026-03-07T01:12:36.755520983Z" level=warning msg="cleanup warnings time=\"2026-03-07T01:12:36Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 7 01:12:37.006929 kubelet[3316]: I0307 01:12:37.006884 3316 scope.go:122] "RemoveContainer" containerID="d4c2e1696f80de9b12cdd6e8d49dda26226c3eeb1adbaf4507a551b22fb2e354"
Mar 7 01:12:37.013841 containerd[1984]: time="2026-03-07T01:12:37.013791963Z" level=info msg="CreateContainer within sandbox \"5645e4c2517402f5c8fda22c9adbf2cc3c533d8ff3f9515bb529dc965a6ebf72\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Mar 7 01:12:37.038099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2269102600.mount: Deactivated successfully.
Mar 7 01:12:37.043709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount389129596.mount: Deactivated successfully.
Mar 7 01:12:37.051046 containerd[1984]: time="2026-03-07T01:12:37.050987824Z" level=info msg="CreateContainer within sandbox \"5645e4c2517402f5c8fda22c9adbf2cc3c533d8ff3f9515bb529dc965a6ebf72\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"5139d434861719db10bdc291985f3539cec44744c3bb3d58ed0a526f38bfe23f\""
Mar 7 01:12:37.051923 containerd[1984]: time="2026-03-07T01:12:37.051886919Z" level=info msg="StartContainer for \"5139d434861719db10bdc291985f3539cec44744c3bb3d58ed0a526f38bfe23f\""
Mar 7 01:12:37.087795 systemd[1]: Started cri-containerd-5139d434861719db10bdc291985f3539cec44744c3bb3d58ed0a526f38bfe23f.scope - libcontainer container 5139d434861719db10bdc291985f3539cec44744c3bb3d58ed0a526f38bfe23f.
Mar 7 01:12:37.144529 containerd[1984]: time="2026-03-07T01:12:37.144472396Z" level=info msg="StartContainer for \"5139d434861719db10bdc291985f3539cec44744c3bb3d58ed0a526f38bfe23f\" returns successfully"
Mar 7 01:12:41.705143 systemd[1]: cri-containerd-afffce00bbfc4997b60b48aba3a5203636ee0234cb65db56a63d055effe6f6d9.scope: Deactivated successfully.
Mar 7 01:12:41.705464 systemd[1]: cri-containerd-afffce00bbfc4997b60b48aba3a5203636ee0234cb65db56a63d055effe6f6d9.scope: Consumed 1.696s CPU time, 15.9M memory peak, 0B memory swap peak.
Mar 7 01:12:41.737102 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-afffce00bbfc4997b60b48aba3a5203636ee0234cb65db56a63d055effe6f6d9-rootfs.mount: Deactivated successfully.
Mar 7 01:12:41.760456 containerd[1984]: time="2026-03-07T01:12:41.760158090Z" level=info msg="shim disconnected" id=afffce00bbfc4997b60b48aba3a5203636ee0234cb65db56a63d055effe6f6d9 namespace=k8s.io
Mar 7 01:12:41.760456 containerd[1984]: time="2026-03-07T01:12:41.760224052Z" level=warning msg="cleaning up after shim disconnected" id=afffce00bbfc4997b60b48aba3a5203636ee0234cb65db56a63d055effe6f6d9 namespace=k8s.io
Mar 7 01:12:41.760456 containerd[1984]: time="2026-03-07T01:12:41.760245610Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:12:42.024732 kubelet[3316]: I0307 01:12:42.024696 3316 scope.go:122] "RemoveContainer" containerID="afffce00bbfc4997b60b48aba3a5203636ee0234cb65db56a63d055effe6f6d9"
Mar 7 01:12:42.027316 containerd[1984]: time="2026-03-07T01:12:42.027276800Z" level=info msg="CreateContainer within sandbox \"9814a7a9d3c8e21a843dff81dfe8d7d52f46807fd0f5ea2343f630272133c84b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Mar 7 01:12:42.052149 containerd[1984]: time="2026-03-07T01:12:42.051948442Z" level=info msg="CreateContainer within sandbox \"9814a7a9d3c8e21a843dff81dfe8d7d52f46807fd0f5ea2343f630272133c84b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"18692ee716827d0cdbb2b327bfddadc3d52db527810e2ba7973442787f21c345\""
Mar 7 01:12:42.052913 containerd[1984]: time="2026-03-07T01:12:42.052876878Z" level=info msg="StartContainer for \"18692ee716827d0cdbb2b327bfddadc3d52db527810e2ba7973442787f21c345\""
Mar 7 01:12:42.095837 systemd[1]: Started cri-containerd-18692ee716827d0cdbb2b327bfddadc3d52db527810e2ba7973442787f21c345.scope - libcontainer container 18692ee716827d0cdbb2b327bfddadc3d52db527810e2ba7973442787f21c345.
Mar 7 01:12:42.148970 containerd[1984]: time="2026-03-07T01:12:42.148915013Z" level=info msg="StartContainer for \"18692ee716827d0cdbb2b327bfddadc3d52db527810e2ba7973442787f21c345\" returns successfully"
Mar 7 01:12:46.288430 kubelet[3316]: E0307 01:12:46.287654 3316 controller.go:251] "Failed to update lease" err="Put \"https://172.31.31.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-131?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"