Mar 7 01:10:03.935671 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 6 22:58:19 -00 2026 Mar 7 01:10:03.935708 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5 Mar 7 01:10:03.935728 kernel: BIOS-provided physical RAM map: Mar 7 01:10:03.935740 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Mar 7 01:10:03.935751 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable Mar 7 01:10:03.935762 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20 Mar 7 01:10:03.935776 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved Mar 7 01:10:03.935789 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data Mar 7 01:10:03.935800 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS Mar 7 01:10:03.935815 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable Mar 7 01:10:03.935827 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved Mar 7 01:10:03.935839 kernel: NX (Execute Disable) protection: active Mar 7 01:10:03.935851 kernel: APIC: Static calls initialized Mar 7 01:10:03.935863 kernel: efi: EFI v2.7 by EDK II Mar 7 01:10:03.935878 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77015518 Mar 7 01:10:03.935895 kernel: SMBIOS 2.7 present. 
Mar 7 01:10:03.935908 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Mar 7 01:10:03.935922 kernel: Hypervisor detected: KVM Mar 7 01:10:03.935934 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 7 01:10:03.935948 kernel: kvm-clock: using sched offset of 5006494773 cycles Mar 7 01:10:03.935961 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 7 01:10:03.935973 kernel: tsc: Detected 2499.996 MHz processor Mar 7 01:10:03.937538 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 7 01:10:03.937554 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 7 01:10:03.937568 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000 Mar 7 01:10:03.937591 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Mar 7 01:10:03.937607 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 7 01:10:03.937622 kernel: Using GB pages for direct mapping Mar 7 01:10:03.937638 kernel: Secure boot disabled Mar 7 01:10:03.937652 kernel: ACPI: Early table checksum verification disabled Mar 7 01:10:03.937668 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON) Mar 7 01:10:03.937683 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013) Mar 7 01:10:03.937697 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Mar 7 01:10:03.937708 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Mar 7 01:10:03.937725 kernel: ACPI: FACS 0x00000000789D0000 000040 Mar 7 01:10:03.937739 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Mar 7 01:10:03.937755 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Mar 7 01:10:03.937769 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Mar 7 01:10:03.937782 kernel: ACPI: SRAT 0x0000000078958000 
0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Mar 7 01:10:03.937794 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Mar 7 01:10:03.937813 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Mar 7 01:10:03.937832 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Mar 7 01:10:03.937846 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013) Mar 7 01:10:03.937859 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113] Mar 7 01:10:03.937873 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159] Mar 7 01:10:03.937886 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f] Mar 7 01:10:03.937899 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027] Mar 7 01:10:03.937917 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b] Mar 7 01:10:03.937931 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075] Mar 7 01:10:03.937946 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f] Mar 7 01:10:03.937960 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037] Mar 7 01:10:03.938088 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758] Mar 7 01:10:03.938104 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e] Mar 7 01:10:03.938119 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037] Mar 7 01:10:03.938133 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Mar 7 01:10:03.938148 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Mar 7 01:10:03.938163 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Mar 7 01:10:03.938183 kernel: NUMA: Initialized distance table, cnt=1 Mar 7 01:10:03.938197 kernel: NODE_DATA(0) allocated [mem 0x7a8f0000-0x7a8f5fff] Mar 7 01:10:03.938212 kernel: Zone ranges: Mar 7 01:10:03.938227 kernel: DMA [mem 
0x0000000000001000-0x0000000000ffffff] Mar 7 01:10:03.938241 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff] Mar 7 01:10:03.938255 kernel: Normal empty Mar 7 01:10:03.938270 kernel: Movable zone start for each node Mar 7 01:10:03.938285 kernel: Early memory node ranges Mar 7 01:10:03.938299 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Mar 7 01:10:03.938317 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff] Mar 7 01:10:03.938331 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff] Mar 7 01:10:03.938346 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff] Mar 7 01:10:03.938360 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 7 01:10:03.938375 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Mar 7 01:10:03.938390 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Mar 7 01:10:03.938405 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges Mar 7 01:10:03.938420 kernel: ACPI: PM-Timer IO Port: 0xb008 Mar 7 01:10:03.938434 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 7 01:10:03.938451 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Mar 7 01:10:03.938464 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 7 01:10:03.938476 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 7 01:10:03.938489 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 7 01:10:03.938502 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 7 01:10:03.938516 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 7 01:10:03.938529 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 7 01:10:03.938543 kernel: TSC deadline timer available Mar 7 01:10:03.938556 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Mar 7 01:10:03.938570 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 7 01:10:03.938587 kernel: [mem 0x7ca00000-0xffffffff] 
available for PCI devices Mar 7 01:10:03.938600 kernel: Booting paravirtualized kernel on KVM Mar 7 01:10:03.938614 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 7 01:10:03.938628 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Mar 7 01:10:03.938642 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Mar 7 01:10:03.938655 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Mar 7 01:10:03.938668 kernel: pcpu-alloc: [0] 0 1 Mar 7 01:10:03.938681 kernel: kvm-guest: PV spinlocks enabled Mar 7 01:10:03.938695 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 7 01:10:03.938713 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5 Mar 7 01:10:03.938727 kernel: random: crng init done Mar 7 01:10:03.938741 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 7 01:10:03.938754 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Mar 7 01:10:03.938768 kernel: Fallback order for Node 0: 0 Mar 7 01:10:03.938782 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 501318 Mar 7 01:10:03.938795 kernel: Policy zone: DMA32 Mar 7 01:10:03.938809 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 7 01:10:03.938825 kernel: Memory: 1874628K/2037804K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 162916K reserved, 0K cma-reserved) Mar 7 01:10:03.938839 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Mar 7 01:10:03.938853 kernel: Kernel/User page tables isolation: enabled Mar 7 01:10:03.938866 kernel: ftrace: allocating 37996 entries in 149 pages Mar 7 01:10:03.938880 kernel: ftrace: allocated 149 pages with 4 groups Mar 7 01:10:03.938894 kernel: Dynamic Preempt: voluntary Mar 7 01:10:03.938907 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 7 01:10:03.938925 kernel: rcu: RCU event tracing is enabled. Mar 7 01:10:03.938939 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Mar 7 01:10:03.938956 kernel: Trampoline variant of Tasks RCU enabled. Mar 7 01:10:03.938970 kernel: Rude variant of Tasks RCU enabled. Mar 7 01:10:03.941027 kernel: Tracing variant of Tasks RCU enabled. Mar 7 01:10:03.941053 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 7 01:10:03.941066 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Mar 7 01:10:03.941080 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Mar 7 01:10:03.941094 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Mar 7 01:10:03.941126 kernel: Console: colour dummy device 80x25 Mar 7 01:10:03.941141 kernel: printk: console [tty0] enabled Mar 7 01:10:03.941158 kernel: printk: console [ttyS0] enabled Mar 7 01:10:03.941173 kernel: ACPI: Core revision 20230628 Mar 7 01:10:03.941191 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Mar 7 01:10:03.941207 kernel: APIC: Switch to symmetric I/O mode setup Mar 7 01:10:03.941223 kernel: x2apic enabled Mar 7 01:10:03.941240 kernel: APIC: Switched APIC routing to: physical x2apic Mar 7 01:10:03.941257 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Mar 7 01:10:03.941277 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996) Mar 7 01:10:03.941294 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Mar 7 01:10:03.941311 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Mar 7 01:10:03.941325 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 7 01:10:03.941339 kernel: Spectre V2 : Mitigation: Retpolines Mar 7 01:10:03.941353 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Mar 7 01:10:03.941367 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Mar 7 01:10:03.941383 kernel: RETBleed: Vulnerable Mar 7 01:10:03.941399 kernel: Speculative Store Bypass: Vulnerable Mar 7 01:10:03.941416 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Mar 7 01:10:03.941436 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Mar 7 01:10:03.941452 kernel: GDS: Unknown: Dependent on hypervisor status Mar 7 01:10:03.941468 kernel: active return thunk: its_return_thunk Mar 7 01:10:03.941482 kernel: ITS: Mitigation: Aligned branch/return thunks Mar 7 01:10:03.941497 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 7 01:10:03.941513 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 7 01:10:03.941528 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 7 01:10:03.941540 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Mar 7 01:10:03.941557 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Mar 7 01:10:03.941571 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Mar 7 01:10:03.941587 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Mar 7 01:10:03.941606 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Mar 7 01:10:03.941622 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Mar 7 01:10:03.941637 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 7 01:10:03.941652 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Mar 7 01:10:03.941667 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Mar 7 01:10:03.941683 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Mar 7 01:10:03.941697 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Mar 7 01:10:03.941714 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Mar 7 01:10:03.941730 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Mar 7 01:10:03.941748 kernel: x86/fpu: Enabled xstate features 0x2ff, 
context size is 2568 bytes, using 'compacted' format. Mar 7 01:10:03.941764 kernel: Freeing SMP alternatives memory: 32K Mar 7 01:10:03.941781 kernel: pid_max: default: 32768 minimum: 301 Mar 7 01:10:03.941802 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 7 01:10:03.941819 kernel: landlock: Up and running. Mar 7 01:10:03.941836 kernel: SELinux: Initializing. Mar 7 01:10:03.941852 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Mar 7 01:10:03.941870 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Mar 7 01:10:03.941887 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Mar 7 01:10:03.941905 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 7 01:10:03.941922 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 7 01:10:03.941940 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 7 01:10:03.941956 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Mar 7 01:10:03.941998 kernel: signal: max sigframe size: 3632 Mar 7 01:10:03.942011 kernel: rcu: Hierarchical SRCU implementation. Mar 7 01:10:03.942025 kernel: rcu: Max phase no-delay instances is 400. Mar 7 01:10:03.942038 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 7 01:10:03.942052 kernel: smp: Bringing up secondary CPUs ... Mar 7 01:10:03.942065 kernel: smpboot: x86: Booting SMP configuration: Mar 7 01:10:03.942081 kernel: .... node #0, CPUs: #1 Mar 7 01:10:03.942098 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Mar 7 01:10:03.942115 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Mar 7 01:10:03.942134 kernel: smp: Brought up 1 node, 2 CPUs Mar 7 01:10:03.942151 kernel: smpboot: Max logical packages: 1 Mar 7 01:10:03.942166 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) Mar 7 01:10:03.942182 kernel: devtmpfs: initialized Mar 7 01:10:03.942198 kernel: x86/mm: Memory block size: 128MB Mar 7 01:10:03.942214 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes) Mar 7 01:10:03.942230 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 7 01:10:03.942246 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Mar 7 01:10:03.942265 kernel: pinctrl core: initialized pinctrl subsystem Mar 7 01:10:03.942281 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 7 01:10:03.942296 kernel: audit: initializing netlink subsys (disabled) Mar 7 01:10:03.942312 kernel: audit: type=2000 audit(1772845804.248:1): state=initialized audit_enabled=0 res=1 Mar 7 01:10:03.942327 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 7 01:10:03.942343 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 7 01:10:03.942359 kernel: cpuidle: using governor menu Mar 7 01:10:03.942375 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 7 01:10:03.942391 kernel: dca service started, version 1.12.1 Mar 7 01:10:03.942409 kernel: PCI: Using configuration type 1 for base access Mar 7 01:10:03.942425 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Mar 7 01:10:03.942441 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 7 01:10:03.942457 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 7 01:10:03.942473 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 7 01:10:03.942488 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 7 01:10:03.942503 kernel: ACPI: Added _OSI(Module Device) Mar 7 01:10:03.942519 kernel: ACPI: Added _OSI(Processor Device) Mar 7 01:10:03.942535 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 7 01:10:03.942553 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Mar 7 01:10:03.942569 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 7 01:10:03.942585 kernel: ACPI: Interpreter enabled Mar 7 01:10:03.942600 kernel: ACPI: PM: (supports S0 S5) Mar 7 01:10:03.942617 kernel: ACPI: Using IOAPIC for interrupt routing Mar 7 01:10:03.942632 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 7 01:10:03.942646 kernel: PCI: Using E820 reservations for host bridge windows Mar 7 01:10:03.942661 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Mar 7 01:10:03.942676 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 7 01:10:03.942915 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Mar 7 01:10:03.944532 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Mar 7 01:10:03.944693 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Mar 7 01:10:03.944715 kernel: acpiphp: Slot [3] registered Mar 7 01:10:03.944731 kernel: acpiphp: Slot [4] registered Mar 7 01:10:03.944748 kernel: acpiphp: Slot [5] registered Mar 7 01:10:03.944764 kernel: acpiphp: Slot [6] registered Mar 7 01:10:03.944779 kernel: acpiphp: Slot [7] registered Mar 7 01:10:03.944800 kernel: acpiphp: Slot [8] registered Mar 
7 01:10:03.944815 kernel: acpiphp: Slot [9] registered Mar 7 01:10:03.944831 kernel: acpiphp: Slot [10] registered Mar 7 01:10:03.944847 kernel: acpiphp: Slot [11] registered Mar 7 01:10:03.944863 kernel: acpiphp: Slot [12] registered Mar 7 01:10:03.944878 kernel: acpiphp: Slot [13] registered Mar 7 01:10:03.944894 kernel: acpiphp: Slot [14] registered Mar 7 01:10:03.944910 kernel: acpiphp: Slot [15] registered Mar 7 01:10:03.944925 kernel: acpiphp: Slot [16] registered Mar 7 01:10:03.944943 kernel: acpiphp: Slot [17] registered Mar 7 01:10:03.944955 kernel: acpiphp: Slot [18] registered Mar 7 01:10:03.944968 kernel: acpiphp: Slot [19] registered Mar 7 01:10:03.944999 kernel: acpiphp: Slot [20] registered Mar 7 01:10:03.945014 kernel: acpiphp: Slot [21] registered Mar 7 01:10:03.945029 kernel: acpiphp: Slot [22] registered Mar 7 01:10:03.945043 kernel: acpiphp: Slot [23] registered Mar 7 01:10:03.945057 kernel: acpiphp: Slot [24] registered Mar 7 01:10:03.945071 kernel: acpiphp: Slot [25] registered Mar 7 01:10:03.945086 kernel: acpiphp: Slot [26] registered Mar 7 01:10:03.945103 kernel: acpiphp: Slot [27] registered Mar 7 01:10:03.945117 kernel: acpiphp: Slot [28] registered Mar 7 01:10:03.945131 kernel: acpiphp: Slot [29] registered Mar 7 01:10:03.945146 kernel: acpiphp: Slot [30] registered Mar 7 01:10:03.945160 kernel: acpiphp: Slot [31] registered Mar 7 01:10:03.945175 kernel: PCI host bridge to bus 0000:00 Mar 7 01:10:03.945331 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 7 01:10:03.945453 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 7 01:10:03.945575 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 7 01:10:03.945695 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Mar 7 01:10:03.945812 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window] Mar 7 01:10:03.945931 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] 
Mar 7 01:10:03.947165 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Mar 7 01:10:03.947366 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Mar 7 01:10:03.947536 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Mar 7 01:10:03.947682 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Mar 7 01:10:03.947836 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Mar 7 01:10:03.947964 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Mar 7 01:10:03.948120 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Mar 7 01:10:03.948250 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Mar 7 01:10:03.948383 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Mar 7 01:10:03.948521 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Mar 7 01:10:03.948665 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Mar 7 01:10:03.948801 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref] Mar 7 01:10:03.948934 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Mar 7 01:10:03.951204 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb Mar 7 01:10:03.951371 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 7 01:10:03.951527 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Mar 7 01:10:03.951676 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff] Mar 7 01:10:03.951823 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Mar 7 01:10:03.951958 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff] Mar 7 01:10:03.951995 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 7 01:10:03.952011 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 7 01:10:03.952026 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 7 01:10:03.952040 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 7 01:10:03.952054 kernel: ACPI: 
PCI: Interrupt link LNKS configured for IRQ 9 Mar 7 01:10:03.952073 kernel: iommu: Default domain type: Translated Mar 7 01:10:03.952087 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 7 01:10:03.952102 kernel: efivars: Registered efivars operations Mar 7 01:10:03.952117 kernel: PCI: Using ACPI for IRQ routing Mar 7 01:10:03.952131 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 7 01:10:03.952146 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff] Mar 7 01:10:03.952159 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff] Mar 7 01:10:03.952290 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Mar 7 01:10:03.952422 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Mar 7 01:10:03.952559 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 7 01:10:03.952577 kernel: vgaarb: loaded Mar 7 01:10:03.952592 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Mar 7 01:10:03.952606 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Mar 7 01:10:03.952620 kernel: clocksource: Switched to clocksource kvm-clock Mar 7 01:10:03.952634 kernel: VFS: Disk quotas dquot_6.6.0 Mar 7 01:10:03.952648 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 7 01:10:03.952661 kernel: pnp: PnP ACPI init Mar 7 01:10:03.952679 kernel: pnp: PnP ACPI: found 5 devices Mar 7 01:10:03.952693 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 7 01:10:03.952708 kernel: NET: Registered PF_INET protocol family Mar 7 01:10:03.952722 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 7 01:10:03.952736 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Mar 7 01:10:03.952750 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 7 01:10:03.952765 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Mar 
7 01:10:03.952779 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Mar 7 01:10:03.952794 kernel: TCP: Hash tables configured (established 16384 bind 16384) Mar 7 01:10:03.952812 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Mar 7 01:10:03.952825 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Mar 7 01:10:03.952839 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 7 01:10:03.952853 kernel: NET: Registered PF_XDP protocol family Mar 7 01:10:03.957012 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 7 01:10:03.957216 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 7 01:10:03.957353 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 7 01:10:03.957497 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Mar 7 01:10:03.957643 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window] Mar 7 01:10:03.957816 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Mar 7 01:10:03.957839 kernel: PCI: CLS 0 bytes, default 64 Mar 7 01:10:03.957855 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Mar 7 01:10:03.957873 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Mar 7 01:10:03.957890 kernel: clocksource: Switched to clocksource tsc Mar 7 01:10:03.957906 kernel: Initialise system trusted keyrings Mar 7 01:10:03.957922 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Mar 7 01:10:03.957937 kernel: Key type asymmetric registered Mar 7 01:10:03.957957 kernel: Asymmetric key parser 'x509' registered Mar 7 01:10:03.957972 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 7 01:10:03.958009 kernel: io scheduler mq-deadline registered Mar 7 01:10:03.958023 kernel: io scheduler kyber registered Mar 7 01:10:03.958036 kernel: io scheduler bfq registered Mar 7 
01:10:03.958050 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 7 01:10:03.958066 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 7 01:10:03.958084 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 7 01:10:03.958101 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 7 01:10:03.958123 kernel: i8042: Warning: Keylock active Mar 7 01:10:03.958140 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 7 01:10:03.958158 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 7 01:10:03.958326 kernel: rtc_cmos 00:00: RTC can wake from S4 Mar 7 01:10:03.958451 kernel: rtc_cmos 00:00: registered as rtc0 Mar 7 01:10:03.958573 kernel: rtc_cmos 00:00: setting system clock to 2026-03-07T01:10:03 UTC (1772845803) Mar 7 01:10:03.958696 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Mar 7 01:10:03.958716 kernel: intel_pstate: CPU model not supported Mar 7 01:10:03.958736 kernel: efifb: probing for efifb Mar 7 01:10:03.958751 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k Mar 7 01:10:03.958767 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1 Mar 7 01:10:03.958782 kernel: efifb: scrolling: redraw Mar 7 01:10:03.958797 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Mar 7 01:10:03.958813 kernel: Console: switching to colour frame buffer device 100x37 Mar 7 01:10:03.958829 kernel: fb0: EFI VGA frame buffer device Mar 7 01:10:03.958845 kernel: pstore: Using crash dump compression: deflate Mar 7 01:10:03.958860 kernel: pstore: Registered efi_pstore as persistent store backend Mar 7 01:10:03.958877 kernel: NET: Registered PF_INET6 protocol family Mar 7 01:10:03.958892 kernel: Segment Routing with IPv6 Mar 7 01:10:03.958908 kernel: In-situ OAM (IOAM) with IPv6 Mar 7 01:10:03.958923 kernel: NET: Registered PF_PACKET protocol family Mar 7 01:10:03.958939 kernel: Key type dns_resolver registered Mar 7 01:10:03.958954 kernel: IPI shorthand 
broadcast: enabled Mar 7 01:10:03.963425 kernel: sched_clock: Marking stable (482002840, 129619732)->(679688377, -68065805) Mar 7 01:10:03.963456 kernel: registered taskstats version 1 Mar 7 01:10:03.963471 kernel: Loading compiled-in X.509 certificates Mar 7 01:10:03.963490 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: da286e6f6c247ee6f65a875c513de7da57782e90' Mar 7 01:10:03.963505 kernel: Key type .fscrypt registered Mar 7 01:10:03.963519 kernel: Key type fscrypt-provisioning registered Mar 7 01:10:03.963534 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 7 01:10:03.963548 kernel: ima: Allocated hash algorithm: sha1 Mar 7 01:10:03.963563 kernel: ima: No architecture policies found Mar 7 01:10:03.963577 kernel: clk: Disabling unused clocks Mar 7 01:10:03.963592 kernel: Freeing unused kernel image (initmem) memory: 42892K Mar 7 01:10:03.963607 kernel: Write protecting the kernel read-only data: 36864k Mar 7 01:10:03.963625 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Mar 7 01:10:03.963640 kernel: Run /init as init process Mar 7 01:10:03.963654 kernel: with arguments: Mar 7 01:10:03.963670 kernel: /init Mar 7 01:10:03.963687 kernel: with environment: Mar 7 01:10:03.963702 kernel: HOME=/ Mar 7 01:10:03.963715 kernel: TERM=linux Mar 7 01:10:03.963735 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 7 01:10:03.963757 systemd[1]: Detected virtualization amazon. Mar 7 01:10:03.963772 systemd[1]: Detected architecture x86-64. Mar 7 01:10:03.963787 systemd[1]: Running in initrd. Mar 7 01:10:03.963802 systemd[1]: No hostname configured, using default hostname. 
Mar 7 01:10:03.963816 systemd[1]: Hostname set to . Mar 7 01:10:03.963832 systemd[1]: Initializing machine ID from VM UUID. Mar 7 01:10:03.963848 systemd[1]: Queued start job for default target initrd.target. Mar 7 01:10:03.963863 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 01:10:03.963882 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 01:10:03.963899 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 7 01:10:03.963915 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 7 01:10:03.963931 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 7 01:10:03.963951 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 7 01:10:03.963973 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 7 01:10:03.964003 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 7 01:10:03.964018 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 01:10:03.964034 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 7 01:10:03.964049 systemd[1]: Reached target paths.target - Path Units. Mar 7 01:10:03.964064 systemd[1]: Reached target slices.target - Slice Units. Mar 7 01:10:03.964080 systemd[1]: Reached target swap.target - Swaps. Mar 7 01:10:03.964099 systemd[1]: Reached target timers.target - Timer Units. Mar 7 01:10:03.964114 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 7 01:10:03.964130 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Mar 7 01:10:03.964146 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 01:10:03.964161 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 7 01:10:03.964176 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:10:03.964191 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:10:03.964207 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:10:03.964223 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:10:03.964241 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 7 01:10:03.964256 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:10:03.964271 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 7 01:10:03.964287 systemd[1]: Starting systemd-fsck-usr.service...
Mar 7 01:10:03.964302 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:10:03.964317 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:10:03.964378 systemd-journald[179]: Collecting audit messages is disabled.
Mar 7 01:10:03.964417 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:10:03.964433 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 7 01:10:03.964448 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:10:03.964464 systemd[1]: Finished systemd-fsck-usr.service.
Mar 7 01:10:03.964484 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:10:03.964500 systemd-journald[179]: Journal started
Mar 7 01:10:03.964533 systemd-journald[179]: Runtime Journal (/run/log/journal/ec28beceb453a05e71dc51e832d096ad) is 4.7M, max 38.2M, 33.4M free.
Mar 7 01:10:03.967123 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:10:03.978221 systemd-modules-load[180]: Inserted module 'overlay'
Mar 7 01:10:03.986218 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:10:03.989312 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:10:03.990910 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:10:03.999620 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:10:04.006015 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 7 01:10:04.012195 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:10:04.022917 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:10:04.033172 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 7 01:10:04.033211 kernel: Bridge firewalling registered
Mar 7 01:10:04.027112 systemd-modules-load[180]: Inserted module 'br_netfilter'
Mar 7 01:10:04.041320 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:10:04.043347 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:10:04.053229 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 7 01:10:04.057195 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:10:04.058756 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:10:04.069791 dracut-cmdline[209]: dracut-dracut-053
Mar 7 01:10:04.074493 dracut-cmdline[209]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:10:04.080597 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:10:04.092219 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 01:10:04.131748 systemd-resolved[232]: Positive Trust Anchors:
Mar 7 01:10:04.132700 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 01:10:04.132768 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 01:10:04.141848 systemd-resolved[232]: Defaulting to hostname 'linux'.
Mar 7 01:10:04.144147 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 01:10:04.145621 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:10:04.174017 kernel: SCSI subsystem initialized
Mar 7 01:10:04.184033 kernel: Loading iSCSI transport class v2.0-870.
Mar 7 01:10:04.196015 kernel: iscsi: registered transport (tcp)
Mar 7 01:10:04.218300 kernel: iscsi: registered transport (qla4xxx)
Mar 7 01:10:04.218386 kernel: QLogic iSCSI HBA Driver
Mar 7 01:10:04.257621 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:10:04.263272 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 7 01:10:04.290481 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 7 01:10:04.290560 kernel: device-mapper: uevent: version 1.0.3
Mar 7 01:10:04.290583 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 7 01:10:04.334013 kernel: raid6: avx512x4 gen() 17860 MB/s
Mar 7 01:10:04.352001 kernel: raid6: avx512x2 gen() 17749 MB/s
Mar 7 01:10:04.370006 kernel: raid6: avx512x1 gen() 17805 MB/s
Mar 7 01:10:04.388000 kernel: raid6: avx2x4 gen() 17718 MB/s
Mar 7 01:10:04.406002 kernel: raid6: avx2x2 gen() 17709 MB/s
Mar 7 01:10:04.424311 kernel: raid6: avx2x1 gen() 13645 MB/s
Mar 7 01:10:04.424370 kernel: raid6: using algorithm avx512x4 gen() 17860 MB/s
Mar 7 01:10:04.443269 kernel: raid6: .... xor() 7781 MB/s, rmw enabled
Mar 7 01:10:04.443333 kernel: raid6: using avx512x2 recovery algorithm
Mar 7 01:10:04.465012 kernel: xor: automatically using best checksumming function avx
Mar 7 01:10:04.625010 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 7 01:10:04.635704 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:10:04.644200 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:10:04.657173 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Mar 7 01:10:04.662353 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:10:04.670332 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 7 01:10:04.691397 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation
Mar 7 01:10:04.722377 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 01:10:04.727209 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 01:10:04.778827 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:10:04.787220 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 7 01:10:04.808054 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:10:04.813492 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:10:04.815257 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:10:04.815766 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 01:10:04.824617 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 7 01:10:04.847931 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:10:04.881003 kernel: cryptd: max_cpu_qlen set to 1000
Mar 7 01:10:04.896466 kernel: ena 0000:00:05.0: ENA device version: 0.10
Mar 7 01:10:04.896757 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Mar 7 01:10:04.907000 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Mar 7 01:10:04.909378 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:10:04.910407 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:10:04.914480 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:10:04.915091 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:10:04.915387 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:10:04.917084 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:10:04.929019 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 7 01:10:04.929086 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:6a:88:72:8f:cf
Mar 7 01:10:04.929388 kernel: AES CTR mode by8 optimization enabled
Mar 7 01:10:04.928410 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:10:04.936930 (udev-worker)[450]: Network interface NamePolicy= disabled on kernel command line.
Mar 7 01:10:04.939328 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:10:04.940530 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:10:04.952197 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:10:04.961160 kernel: nvme nvme0: pci function 0000:00:04.0
Mar 7 01:10:04.961433 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Mar 7 01:10:04.977147 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Mar 7 01:10:04.982110 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:10:04.990023 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 7 01:10:04.990099 kernel: GPT:9289727 != 33554431
Mar 7 01:10:04.990119 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 7 01:10:04.990137 kernel: GPT:9289727 != 33554431
Mar 7 01:10:04.991267 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 7 01:10:04.991316 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 7 01:10:04.994289 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:10:05.013897 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:10:05.067671 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Mar 7 01:10:05.167011 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (459)
Mar 7 01:10:05.183383 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Mar 7 01:10:05.185763 kernel: BTRFS: device fsid 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 devid 1 transid 34 /dev/nvme0n1p3 scanned by (udev-worker) (466)
Mar 7 01:10:05.242071 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 7 01:10:05.248028 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Mar 7 01:10:05.248597 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Mar 7 01:10:05.265237 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 7 01:10:05.272470 disk-uuid[633]: Primary Header is updated.
Mar 7 01:10:05.272470 disk-uuid[633]: Secondary Entries is updated.
Mar 7 01:10:05.272470 disk-uuid[633]: Secondary Header is updated.
Mar 7 01:10:05.280025 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 7 01:10:05.287633 kernel: GPT:disk_guids don't match.
Mar 7 01:10:05.287707 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 7 01:10:05.287722 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 7 01:10:05.296047 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 7 01:10:06.295089 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 7 01:10:06.296420 disk-uuid[634]: The operation has completed successfully.
Mar 7 01:10:06.431697 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 7 01:10:06.431843 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 7 01:10:06.453183 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 7 01:10:06.458221 sh[975]: Success
Mar 7 01:10:06.479994 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Mar 7 01:10:06.574714 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 7 01:10:06.584127 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 7 01:10:06.585782 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 7 01:10:06.620276 kernel: BTRFS info (device dm-0): first mount of filesystem 3bed8db9-42ad-4483-9cc8-1ad17a6cd948
Mar 7 01:10:06.620352 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:10:06.620375 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 7 01:10:06.623690 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 7 01:10:06.623770 kernel: BTRFS info (device dm-0): using free space tree
Mar 7 01:10:06.662029 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 7 01:10:06.675411 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 7 01:10:06.676659 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 7 01:10:06.684209 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 7 01:10:06.686171 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 7 01:10:06.713798 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:10:06.713878 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:10:06.713910 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 7 01:10:06.731421 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 7 01:10:06.745653 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:10:06.745209 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 7 01:10:06.754175 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 7 01:10:06.763227 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 7 01:10:06.791094 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 01:10:06.796205 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 01:10:06.819798 systemd-networkd[1167]: lo: Link UP
Mar 7 01:10:06.819810 systemd-networkd[1167]: lo: Gained carrier
Mar 7 01:10:06.821533 systemd-networkd[1167]: Enumeration completed
Mar 7 01:10:06.822032 systemd-networkd[1167]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:10:06.822037 systemd-networkd[1167]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 01:10:06.823676 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 01:10:06.825424 systemd[1]: Reached target network.target - Network.
Mar 7 01:10:06.826684 systemd-networkd[1167]: eth0: Link UP
Mar 7 01:10:06.826688 systemd-networkd[1167]: eth0: Gained carrier
Mar 7 01:10:06.826700 systemd-networkd[1167]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:10:06.838086 systemd-networkd[1167]: eth0: DHCPv4 address 172.31.24.34/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 7 01:10:07.092609 ignition[1125]: Ignition 2.19.0
Mar 7 01:10:07.092623 ignition[1125]: Stage: fetch-offline
Mar 7 01:10:07.092900 ignition[1125]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:10:07.094707 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 01:10:07.092912 ignition[1125]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 01:10:07.093274 ignition[1125]: Ignition finished successfully
Mar 7 01:10:07.105247 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 7 01:10:07.120813 ignition[1175]: Ignition 2.19.0
Mar 7 01:10:07.120827 ignition[1175]: Stage: fetch
Mar 7 01:10:07.121339 ignition[1175]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:10:07.121354 ignition[1175]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 01:10:07.121480 ignition[1175]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 01:10:07.137367 ignition[1175]: PUT result: OK
Mar 7 01:10:07.139427 ignition[1175]: parsed url from cmdline: ""
Mar 7 01:10:07.139438 ignition[1175]: no config URL provided
Mar 7 01:10:07.139449 ignition[1175]: reading system config file "/usr/lib/ignition/user.ign"
Mar 7 01:10:07.139478 ignition[1175]: no config at "/usr/lib/ignition/user.ign"
Mar 7 01:10:07.139518 ignition[1175]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 01:10:07.140484 ignition[1175]: PUT result: OK
Mar 7 01:10:07.140552 ignition[1175]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Mar 7 01:10:07.141180 ignition[1175]: GET result: OK
Mar 7 01:10:07.141261 ignition[1175]: parsing config with SHA512: b22bbcd65e097a42c24c8ca0a2ec0dca946764f8666fd25625d2befa1b884d9470185ea6338c8e2ad97dc32b87eb04f7c3a2ec9feaccf26601dbf6c7b239630b
Mar 7 01:10:07.146207 unknown[1175]: fetched base config from "system"
Mar 7 01:10:07.146657 ignition[1175]: fetch: fetch complete
Mar 7 01:10:07.146222 unknown[1175]: fetched base config from "system"
Mar 7 01:10:07.146664 ignition[1175]: fetch: fetch passed
Mar 7 01:10:07.146228 unknown[1175]: fetched user config from "aws"
Mar 7 01:10:07.146710 ignition[1175]: Ignition finished successfully
Mar 7 01:10:07.150744 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 7 01:10:07.156276 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 7 01:10:07.171487 ignition[1181]: Ignition 2.19.0
Mar 7 01:10:07.171502 ignition[1181]: Stage: kargs
Mar 7 01:10:07.172017 ignition[1181]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:10:07.172031 ignition[1181]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 01:10:07.172156 ignition[1181]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 01:10:07.173018 ignition[1181]: PUT result: OK
Mar 7 01:10:07.175440 ignition[1181]: kargs: kargs passed
Mar 7 01:10:07.175511 ignition[1181]: Ignition finished successfully
Mar 7 01:10:07.177278 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 7 01:10:07.183226 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 7 01:10:07.197152 ignition[1187]: Ignition 2.19.0
Mar 7 01:10:07.197165 ignition[1187]: Stage: disks
Mar 7 01:10:07.197630 ignition[1187]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:10:07.197644 ignition[1187]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 01:10:07.197761 ignition[1187]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 01:10:07.198560 ignition[1187]: PUT result: OK
Mar 7 01:10:07.201008 ignition[1187]: disks: disks passed
Mar 7 01:10:07.201077 ignition[1187]: Ignition finished successfully
Mar 7 01:10:07.202801 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 7 01:10:07.203524 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 7 01:10:07.203876 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 7 01:10:07.204435 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 01:10:07.204966 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 01:10:07.205529 systemd[1]: Reached target basic.target - Basic System.
Mar 7 01:10:07.210198 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 7 01:10:07.246430 systemd-fsck[1195]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 7 01:10:07.250292 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 7 01:10:07.256080 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 7 01:10:07.359143 kernel: EXT4-fs (nvme0n1p9): mounted filesystem aab0506b-de72-4dd2-9393-24d7958f49a5 r/w with ordered data mode. Quota mode: none.
Mar 7 01:10:07.358677 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 7 01:10:07.359881 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 7 01:10:07.379259 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:10:07.383142 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 7 01:10:07.385029 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 7 01:10:07.385101 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 7 01:10:07.385136 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 01:10:07.397372 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 7 01:10:07.403086 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1214)
Mar 7 01:10:07.408025 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:10:07.408089 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:10:07.408111 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 7 01:10:07.406107 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 7 01:10:07.425038 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 7 01:10:07.427593 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:10:07.794352 initrd-setup-root[1238]: cut: /sysroot/etc/passwd: No such file or directory
Mar 7 01:10:07.813645 initrd-setup-root[1245]: cut: /sysroot/etc/group: No such file or directory
Mar 7 01:10:07.818896 initrd-setup-root[1252]: cut: /sysroot/etc/shadow: No such file or directory
Mar 7 01:10:07.824085 initrd-setup-root[1259]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 7 01:10:08.045791 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 7 01:10:08.054088 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 7 01:10:08.057157 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 7 01:10:08.065910 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 7 01:10:08.068012 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:10:08.101805 ignition[1326]: INFO : Ignition 2.19.0
Mar 7 01:10:08.101805 ignition[1326]: INFO : Stage: mount
Mar 7 01:10:08.103458 ignition[1326]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:10:08.103458 ignition[1326]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 01:10:08.103458 ignition[1326]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 01:10:08.105510 ignition[1326]: INFO : PUT result: OK
Mar 7 01:10:08.108795 ignition[1326]: INFO : mount: mount passed
Mar 7 01:10:08.108795 ignition[1326]: INFO : Ignition finished successfully
Mar 7 01:10:08.109971 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 7 01:10:08.112477 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 7 01:10:08.118126 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 7 01:10:08.130195 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:10:08.147286 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1338)
Mar 7 01:10:08.147351 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:10:08.151150 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:10:08.151220 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 7 01:10:08.158040 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 7 01:10:08.159730 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:10:08.187321 ignition[1355]: INFO : Ignition 2.19.0
Mar 7 01:10:08.187321 ignition[1355]: INFO : Stage: files
Mar 7 01:10:08.188668 ignition[1355]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:10:08.188668 ignition[1355]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 01:10:08.188668 ignition[1355]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 01:10:08.189872 ignition[1355]: INFO : PUT result: OK
Mar 7 01:10:08.191502 ignition[1355]: DEBUG : files: compiled without relabeling support, skipping
Mar 7 01:10:08.192439 ignition[1355]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 7 01:10:08.192439 ignition[1355]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 7 01:10:08.205534 ignition[1355]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 7 01:10:08.206531 ignition[1355]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 7 01:10:08.206531 ignition[1355]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 7 01:10:08.206062 unknown[1355]: wrote ssh authorized keys file for user: core
Mar 7 01:10:08.219374 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:10:08.220427 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 7 01:10:08.289580 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 7 01:10:08.559851 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:10:08.560885 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 7 01:10:08.560885 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 7 01:10:08.818466 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 7 01:10:08.864211 systemd-networkd[1167]: eth0: Gained IPv6LL
Mar 7 01:10:08.980089 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 7 01:10:08.983770 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 7 01:10:08.983770 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 7 01:10:08.983770 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:10:08.983770 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:10:08.983770 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:10:08.983770 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:10:08.983770 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:10:08.983770 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:10:08.990059 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:10:08.990059 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:10:08.990059 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 7 01:10:08.990059 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 7 01:10:08.990059 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 7 01:10:08.990059 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Mar 7 01:10:09.663965 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 7 01:10:11.425235 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 7 01:10:11.425235 ignition[1355]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 7 01:10:11.428481 ignition[1355]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:10:11.428481 ignition[1355]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:10:11.428481 ignition[1355]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 7 01:10:11.428481 ignition[1355]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Mar 7 01:10:11.428481 ignition[1355]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Mar 7 01:10:11.428481 ignition[1355]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:10:11.428481 ignition[1355]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:10:11.428481 ignition[1355]: INFO : files: files passed
Mar 7 01:10:11.428481 ignition[1355]: INFO : Ignition finished successfully
Mar 7 01:10:11.431873 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 7 01:10:11.440212 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 7 01:10:11.443899 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 7 01:10:11.446644 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 7 01:10:11.447497 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 7 01:10:11.465682 initrd-setup-root-after-ignition[1384]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:10:11.465682 initrd-setup-root-after-ignition[1384]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:10:11.468855 initrd-setup-root-after-ignition[1388]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:10:11.470280 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:10:11.470942 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 7 01:10:11.476222 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 7 01:10:11.508168 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 7 01:10:11.508301 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 7 01:10:11.509517 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 7 01:10:11.510631 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 7 01:10:11.511518 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 7 01:10:11.517239 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 7 01:10:11.530808 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 01:10:11.536173 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 7 01:10:11.556745 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:10:11.557445 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:10:11.558455 systemd[1]: Stopped target timers.target - Timer Units.
Mar 7 01:10:11.559373 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 7 01:10:11.559548 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 01:10:11.560694 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 7 01:10:11.561538 systemd[1]: Stopped target basic.target - Basic System.
Mar 7 01:10:11.562331 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 7 01:10:11.563244 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 01:10:11.563944 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 7 01:10:11.564716 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 7 01:10:11.565475 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:10:11.566262 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 7 01:10:11.567506 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 7 01:10:11.568260 systemd[1]: Stopped target swap.target - Swaps.
Mar 7 01:10:11.568958 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 7 01:10:11.569156 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:10:11.570231 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:10:11.571077 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:10:11.571775 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 7 01:10:11.572508 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:10:11.573069 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 7 01:10:11.573240 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:10:11.574684 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 7 01:10:11.574865 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:10:11.575663 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 7 01:10:11.575813 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 7 01:10:11.584266 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 7 01:10:11.584959 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 7 01:10:11.585175 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:10:11.590291 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 7 01:10:11.591545 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 7 01:10:11.592266 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:10:11.594495 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 7 01:10:11.594674 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 01:10:11.602234 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 7 01:10:11.602365 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 7 01:10:11.609172 ignition[1408]: INFO : Ignition 2.19.0
Mar 7 01:10:11.610797 ignition[1408]: INFO : Stage: umount
Mar 7 01:10:11.610797 ignition[1408]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:10:11.610797 ignition[1408]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 01:10:11.610797 ignition[1408]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 01:10:11.613685 ignition[1408]: INFO : PUT result: OK
Mar 7 01:10:11.616608 ignition[1408]: INFO : umount: umount passed
Mar 7 01:10:11.618234 ignition[1408]: INFO : Ignition finished successfully
Mar 7 01:10:11.618877 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 7 01:10:11.619189 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 7 01:10:11.621399 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 7 01:10:11.621522 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 7 01:10:11.623187 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 7 01:10:11.623251 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 7 01:10:11.623779 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 7 01:10:11.623830 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 7 01:10:11.624464 systemd[1]: Stopped target network.target - Network.
Mar 7 01:10:11.625882 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 7 01:10:11.625945 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 01:10:11.626962 systemd[1]: Stopped target paths.target - Path Units.
Mar 7 01:10:11.627924 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 7 01:10:11.631105 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:10:11.631582 systemd[1]: Stopped target slices.target - Slice Units.
Mar 7 01:10:11.632028 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 7 01:10:11.632506 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 7 01:10:11.632563 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:10:11.634096 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 7 01:10:11.634153 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:10:11.634649 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 7 01:10:11.634719 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 7 01:10:11.635375 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 7 01:10:11.635434 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 7 01:10:11.636160 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 7 01:10:11.636793 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 7 01:10:11.642461 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 7 01:10:11.643175 systemd-networkd[1167]: eth0: DHCPv6 lease lost
Mar 7 01:10:11.643424 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 7 01:10:11.643546 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 7 01:10:11.645712 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 7 01:10:11.645825 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 7 01:10:11.648906 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 7 01:10:11.649104 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 7 01:10:11.651600 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 7 01:10:11.651661 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:10:11.652183 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 7 01:10:11.652249 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 7 01:10:11.661236 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 7 01:10:11.661887 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 7 01:10:11.661992 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 01:10:11.662679 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 7 01:10:11.662744 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:10:11.663519 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 7 01:10:11.663582 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:10:11.664315 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 7 01:10:11.664395 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:10:11.665190 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:10:11.684329 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 7 01:10:11.684573 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:10:11.686372 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 7 01:10:11.686447 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:10:11.687254 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 7 01:10:11.687304 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:10:11.688664 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 7 01:10:11.688731 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:10:11.689833 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 7 01:10:11.689898 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:10:11.691179 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:10:11.691244 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:10:11.697258 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 7 01:10:11.698437 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 7 01:10:11.698530 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:10:11.701084 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:10:11.701163 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:10:11.702348 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 7 01:10:11.702488 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 7 01:10:11.708346 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 7 01:10:11.708503 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 7 01:10:11.709490 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 7 01:10:11.713152 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 7 01:10:11.723884 systemd[1]: Switching root.
Mar 7 01:10:11.763232 systemd-journald[179]: Journal stopped
Mar 7 01:10:13.529129 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Mar 7 01:10:13.529224 kernel: SELinux: policy capability network_peer_controls=1
Mar 7 01:10:13.529247 kernel: SELinux: policy capability open_perms=1
Mar 7 01:10:13.529264 kernel: SELinux: policy capability extended_socket_class=1
Mar 7 01:10:13.529290 kernel: SELinux: policy capability always_check_network=0
Mar 7 01:10:13.529311 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 7 01:10:13.529337 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 7 01:10:13.529357 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 7 01:10:13.529375 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 7 01:10:13.529402 kernel: audit: type=1403 audit(1772845812.308:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 7 01:10:13.529424 systemd[1]: Successfully loaded SELinux policy in 48.880ms.
Mar 7 01:10:13.529468 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.620ms.
Mar 7 01:10:13.529495 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:10:13.529520 systemd[1]: Detected virtualization amazon.
Mar 7 01:10:13.529542 systemd[1]: Detected architecture x86-64.
Mar 7 01:10:13.529564 systemd[1]: Detected first boot.
Mar 7 01:10:13.529587 systemd[1]: Initializing machine ID from VM UUID.
Mar 7 01:10:13.529608 zram_generator::config[1450]: No configuration found.
Mar 7 01:10:13.529634 systemd[1]: Populated /etc with preset unit settings.
Mar 7 01:10:13.529655 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 7 01:10:13.529677 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 7 01:10:13.529699 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 7 01:10:13.529723 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 7 01:10:13.529746 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 7 01:10:13.529767 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 7 01:10:13.529788 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 7 01:10:13.529813 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 7 01:10:13.529835 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 7 01:10:13.529856 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 7 01:10:13.529878 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 7 01:10:13.529900 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:10:13.529922 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:10:13.529944 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 7 01:10:13.529966 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 7 01:10:13.530024 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 7 01:10:13.530046 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:10:13.530066 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 7 01:10:13.530084 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:10:13.530106 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 7 01:10:13.530125 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 7 01:10:13.530145 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 7 01:10:13.530166 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 7 01:10:13.530195 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:10:13.530214 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 01:10:13.530234 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:10:13.530254 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:10:13.530273 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 7 01:10:13.530292 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 7 01:10:13.530311 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:10:13.530330 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:10:13.530350 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:10:13.530369 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 7 01:10:13.530392 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 7 01:10:13.530411 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 7 01:10:13.530431 systemd[1]: Mounting media.mount - External Media Directory...
Mar 7 01:10:13.530451 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:10:13.530471 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 7 01:10:13.530489 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 7 01:10:13.530508 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 7 01:10:13.530534 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 7 01:10:13.530557 systemd[1]: Reached target machines.target - Containers.
Mar 7 01:10:13.530582 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 7 01:10:13.530602 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 01:10:13.530621 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:10:13.530640 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 7 01:10:13.530660 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 01:10:13.530678 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 01:10:13.530697 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 01:10:13.530720 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 7 01:10:13.530739 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 01:10:13.530758 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 7 01:10:13.530778 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 7 01:10:13.530797 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 7 01:10:13.530815 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 7 01:10:13.530835 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 7 01:10:13.530854 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:10:13.530872 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:10:13.530894 kernel: loop: module loaded
Mar 7 01:10:13.530914 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 7 01:10:13.530934 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 7 01:10:13.530954 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 01:10:13.531001 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 7 01:10:13.531020 kernel: ACPI: bus type drm_connector registered
Mar 7 01:10:13.531038 systemd[1]: Stopped verity-setup.service.
Mar 7 01:10:13.531059 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 01:10:13.531078 kernel: fuse: init (API version 7.39)
Mar 7 01:10:13.531099 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 7 01:10:13.531118 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 7 01:10:13.531136 systemd[1]: Mounted media.mount - External Media Directory.
Mar 7 01:10:13.531153 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 7 01:10:13.531173 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 7 01:10:13.531196 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 7 01:10:13.531216 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:10:13.531235 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 7 01:10:13.531253 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 7 01:10:13.531271 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 01:10:13.531322 systemd-journald[1535]: Collecting audit messages is disabled.
Mar 7 01:10:13.531362 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 01:10:13.531387 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 01:10:13.531409 systemd-journald[1535]: Journal started
Mar 7 01:10:13.531443 systemd-journald[1535]: Runtime Journal (/run/log/journal/ec28beceb453a05e71dc51e832d096ad) is 4.7M, max 38.2M, 33.4M free.
Mar 7 01:10:13.130002 systemd[1]: Queued start job for default target multi-user.target.
Mar 7 01:10:13.179499 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Mar 7 01:10:13.179926 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 7 01:10:13.534032 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 01:10:13.539443 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:10:13.539216 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 01:10:13.539642 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 01:10:13.541678 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 7 01:10:13.541874 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 7 01:10:13.543271 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 01:10:13.543930 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 01:10:13.546402 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:10:13.547541 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 7 01:10:13.548600 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 7 01:10:13.576291 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 7 01:10:13.588089 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 7 01:10:13.597095 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 7 01:10:13.598673 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 7 01:10:13.598729 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 01:10:13.603778 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 7 01:10:13.621214 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 7 01:10:13.628132 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 7 01:10:13.628775 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 01:10:13.637263 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 7 01:10:13.641111 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 7 01:10:13.641604 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 01:10:13.650235 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 7 01:10:13.651767 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 01:10:13.655216 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:10:13.664291 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 7 01:10:13.668833 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 7 01:10:13.669823 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:10:13.670630 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 7 01:10:13.671518 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 7 01:10:13.672783 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 7 01:10:13.682035 systemd-journald[1535]: Time spent on flushing to /var/log/journal/ec28beceb453a05e71dc51e832d096ad is 76.492ms for 988 entries.
Mar 7 01:10:13.682035 systemd-journald[1535]: System Journal (/var/log/journal/ec28beceb453a05e71dc51e832d096ad) is 8.0M, max 195.6M, 187.6M free.
Mar 7 01:10:13.778304 systemd-journald[1535]: Received client request to flush runtime journal.
Mar 7 01:10:13.778382 kernel: loop0: detected capacity change from 0 to 140768
Mar 7 01:10:13.680211 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 7 01:10:13.688303 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 7 01:10:13.699224 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 7 01:10:13.703416 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 7 01:10:13.708266 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 7 01:10:13.729414 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 7 01:10:13.731372 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 7 01:10:13.786529 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:10:13.790234 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 7 01:10:13.795240 udevadm[1588]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 7 01:10:13.865480 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 7 01:10:13.876614 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:10:13.902205 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 7 01:10:13.934278 kernel: loop1: detected capacity change from 0 to 61336
Mar 7 01:10:13.940665 systemd-tmpfiles[1596]: ACLs are not supported, ignoring.
Mar 7 01:10:13.941142 systemd-tmpfiles[1596]: ACLs are not supported, ignoring.
Mar 7 01:10:13.950443 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:10:14.052021 kernel: loop2: detected capacity change from 0 to 228704
Mar 7 01:10:14.194011 kernel: loop3: detected capacity change from 0 to 142488
Mar 7 01:10:14.308000 kernel: loop4: detected capacity change from 0 to 140768
Mar 7 01:10:14.340009 kernel: loop5: detected capacity change from 0 to 61336
Mar 7 01:10:14.362024 kernel: loop6: detected capacity change from 0 to 228704
Mar 7 01:10:14.391044 kernel: loop7: detected capacity change from 0 to 142488
Mar 7 01:10:14.414531 (sd-merge)[1605]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Mar 7 01:10:14.415653 (sd-merge)[1605]: Merged extensions into '/usr'.
Mar 7 01:10:14.429518 systemd[1]: Reloading requested from client PID 1578 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 7 01:10:14.429662 systemd[1]: Reloading...
Mar 7 01:10:14.538037 zram_generator::config[1637]: No configuration found.
Mar 7 01:10:14.720499 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:10:14.786360 systemd[1]: Reloading finished in 356 ms.
Mar 7 01:10:14.816726 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 7 01:10:14.817510 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 7 01:10:14.825175 systemd[1]: Starting ensure-sysext.service...
Mar 7 01:10:14.828193 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:10:14.837211 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:10:14.872096 systemd[1]: Reloading requested from client PID 1683 ('systemctl') (unit ensure-sysext.service)...
Mar 7 01:10:14.872285 systemd[1]: Reloading...
Mar 7 01:10:14.876585 systemd-udevd[1685]: Using default interface naming scheme 'v255'.
Mar 7 01:10:14.937097 zram_generator::config[1712]: No configuration found.
Mar 7 01:10:14.935224 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 7 01:10:14.935819 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 7 01:10:14.941368 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 7 01:10:14.942682 systemd-tmpfiles[1684]: ACLs are not supported, ignoring.
Mar 7 01:10:14.943247 systemd-tmpfiles[1684]: ACLs are not supported, ignoring.
Mar 7 01:10:14.967581 systemd-tmpfiles[1684]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 01:10:14.967595 systemd-tmpfiles[1684]: Skipping /boot
Mar 7 01:10:14.990929 systemd-tmpfiles[1684]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 01:10:14.990945 systemd-tmpfiles[1684]: Skipping /boot
Mar 7 01:10:15.133172 (udev-worker)[1754]: Network interface NamePolicy= disabled on kernel command line.
Mar 7 01:10:15.215009 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Mar 7 01:10:15.231997 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 7 01:10:15.240019 kernel: ACPI: button: Power Button [PWRF]
Mar 7 01:10:15.253367 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3
Mar 7 01:10:15.251132 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:10:15.268272 kernel: ACPI: button: Sleep Button [SLPF]
Mar 7 01:10:15.275008 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4
Mar 7 01:10:15.410040 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1741)
Mar 7 01:10:15.433784 kernel: mousedev: PS/2 mouse device common for all mice
Mar 7 01:10:15.433743 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 7 01:10:15.434507 systemd[1]: Reloading finished in 561 ms.
Mar 7 01:10:15.454289 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:10:15.455969 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:10:15.484004 ldconfig[1573]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 7 01:10:15.489588 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 7 01:10:15.530442 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 7 01:10:15.533892 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 7 01:10:15.538393 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 7 01:10:15.549089 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 7 01:10:15.555200 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 7 01:10:15.561397 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 7 01:10:15.571091 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:10:15.602199 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:10:15.602514 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 01:10:15.610628 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 7 01:10:15.620329 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 7 01:10:15.629594 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 7 01:10:15.631166 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 01:10:15.640228 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 7 01:10:15.641479 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:10:15.650376 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:10:15.651160 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 01:10:15.651379 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Mar 7 01:10:15.651506 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:10:15.660911 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:10:15.662496 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 01:10:15.673126 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 7 01:10:15.674703 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 01:10:15.676111 systemd[1]: Reached target time-set.target - System Time Set. Mar 7 01:10:15.677532 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:10:15.681437 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 7 01:10:15.693185 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 7 01:10:15.695438 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 7 01:10:15.700094 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 7 01:10:15.700272 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 7 01:10:15.711386 systemd[1]: Finished ensure-sysext.service. Mar 7 01:10:15.713365 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 7 01:10:15.714001 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 7 01:10:15.720895 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Mar 7 01:10:15.753457 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 7 01:10:15.753949 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 7 01:10:15.756397 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 7 01:10:15.756903 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 7 01:10:15.768421 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Mar 7 01:10:15.774285 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 7 01:10:15.774907 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 7 01:10:15.777048 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 7 01:10:15.788246 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 7 01:10:15.791059 augenrules[1914]: No rules Mar 7 01:10:15.801830 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 7 01:10:15.804394 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 7 01:10:15.819576 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 7 01:10:15.831373 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 7 01:10:15.832333 lvm[1915]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 7 01:10:15.851651 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 7 01:10:15.852515 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Mar 7 01:10:15.892738 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 7 01:10:15.894361 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 7 01:10:15.906230 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 7 01:10:15.914878 lvm[1933]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 7 01:10:15.934107 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:10:15.951341 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 7 01:10:15.965927 systemd-networkd[1863]: lo: Link UP Mar 7 01:10:15.965944 systemd-networkd[1863]: lo: Gained carrier Mar 7 01:10:15.967810 systemd-networkd[1863]: Enumeration completed Mar 7 01:10:15.967944 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 7 01:10:15.969124 systemd-networkd[1863]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:10:15.969137 systemd-networkd[1863]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 01:10:15.972135 systemd-resolved[1865]: Positive Trust Anchors: Mar 7 01:10:15.972473 systemd-resolved[1865]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 7 01:10:15.972532 systemd-resolved[1865]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 7 01:10:15.975112 systemd-networkd[1863]: eth0: Link UP Mar 7 01:10:15.975393 systemd-networkd[1863]: eth0: Gained carrier Mar 7 01:10:15.975433 systemd-networkd[1863]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:10:15.979242 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 7 01:10:15.979605 systemd-resolved[1865]: Defaulting to hostname 'linux'. Mar 7 01:10:15.983315 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 7 01:10:15.984846 systemd[1]: Reached target network.target - Network. Mar 7 01:10:15.985088 systemd-networkd[1863]: eth0: DHCPv4 address 172.31.24.34/20, gateway 172.31.16.1 acquired from 172.31.16.1 Mar 7 01:10:15.985445 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 7 01:10:15.986855 systemd[1]: Reached target sysinit.target - System Initialization. Mar 7 01:10:15.987616 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 7 01:10:15.988222 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Mar 7 01:10:15.989411 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 7 01:10:15.990066 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 7 01:10:15.990632 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 7 01:10:15.991093 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 7 01:10:15.991131 systemd[1]: Reached target paths.target - Path Units. Mar 7 01:10:15.991520 systemd[1]: Reached target timers.target - Timer Units. Mar 7 01:10:15.992345 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 7 01:10:15.994128 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 7 01:10:16.003397 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 7 01:10:16.004620 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 7 01:10:16.005196 systemd[1]: Reached target sockets.target - Socket Units. Mar 7 01:10:16.005599 systemd[1]: Reached target basic.target - Basic System. Mar 7 01:10:16.006036 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 7 01:10:16.006075 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 7 01:10:16.007372 systemd[1]: Starting containerd.service - containerd container runtime... Mar 7 01:10:16.011217 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 7 01:10:16.017175 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 7 01:10:16.020108 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 7 01:10:16.024183 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Mar 7 01:10:16.024778 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 7 01:10:16.029145 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 7 01:10:16.033219 systemd[1]: Started ntpd.service - Network Time Service. Mar 7 01:10:16.044213 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 7 01:10:16.050871 systemd[1]: Starting setup-oem.service - Setup OEM... Mar 7 01:10:16.061215 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 7 01:10:16.082216 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 7 01:10:16.092588 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 7 01:10:16.094561 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 7 01:10:16.095560 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 7 01:10:16.101302 systemd[1]: Starting update-engine.service - Update Engine... Mar 7 01:10:16.102910 jq[1944]: false Mar 7 01:10:16.112129 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 7 01:10:16.116465 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 7 01:10:16.117035 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 7 01:10:16.124703 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 7 01:10:16.124961 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Mar 7 01:10:16.150601 extend-filesystems[1945]: Found loop4 Mar 7 01:10:16.150601 extend-filesystems[1945]: Found loop5 Mar 7 01:10:16.150601 extend-filesystems[1945]: Found loop6 Mar 7 01:10:16.150601 extend-filesystems[1945]: Found loop7 Mar 7 01:10:16.150601 extend-filesystems[1945]: Found nvme0n1 Mar 7 01:10:16.150601 extend-filesystems[1945]: Found nvme0n1p1 Mar 7 01:10:16.150601 extend-filesystems[1945]: Found nvme0n1p2 Mar 7 01:10:16.150601 extend-filesystems[1945]: Found nvme0n1p3 Mar 7 01:10:16.150601 extend-filesystems[1945]: Found usr Mar 7 01:10:16.150601 extend-filesystems[1945]: Found nvme0n1p4 Mar 7 01:10:16.150601 extend-filesystems[1945]: Found nvme0n1p6 Mar 7 01:10:16.213651 extend-filesystems[1945]: Found nvme0n1p7 Mar 7 01:10:16.213651 extend-filesystems[1945]: Found nvme0n1p9 Mar 7 01:10:16.213651 extend-filesystems[1945]: Checking size of /dev/nvme0n1p9 Mar 7 01:10:16.218843 update_engine[1956]: I20260307 01:10:16.210184 1956 main.cc:92] Flatcar Update Engine starting Mar 7 01:10:16.219207 jq[1957]: true Mar 7 01:10:16.225580 extend-filesystems[1945]: Resized partition /dev/nvme0n1p9 Mar 7 01:10:16.227864 systemd[1]: motdgen.service: Deactivated successfully. Mar 7 01:10:16.233477 dbus-daemon[1943]: [system] SELinux support is enabled Mar 7 01:10:16.228118 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 7 01:10:16.245207 extend-filesystems[1987]: resize2fs 1.47.1 (20-May-2024) Mar 7 01:10:16.233675 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 7 01:10:16.238841 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 7 01:10:16.238877 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Mar 7 01:10:16.240424 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 7 01:10:16.240451 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 7 01:10:16.240848 (ntainerd)[1977]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 7 01:10:16.256079 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Mar 7 01:10:16.256125 ntpd[1947]: 7 Mar 01:10:16 ntpd[1947]: ntpd 4.2.8p17@1.4004-o Fri Mar 6 22:16:32 UTC 2026 (1): Starting Mar 7 01:10:16.256125 ntpd[1947]: 7 Mar 01:10:16 ntpd[1947]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 7 01:10:16.256125 ntpd[1947]: 7 Mar 01:10:16 ntpd[1947]: ---------------------------------------------------- Mar 7 01:10:16.256125 ntpd[1947]: 7 Mar 01:10:16 ntpd[1947]: ntp-4 is maintained by Network Time Foundation, Mar 7 01:10:16.256125 ntpd[1947]: 7 Mar 01:10:16 ntpd[1947]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 7 01:10:16.256125 ntpd[1947]: 7 Mar 01:10:16 ntpd[1947]: corporation. 
Support and training for ntp-4 are Mar 7 01:10:16.256125 ntpd[1947]: 7 Mar 01:10:16 ntpd[1947]: available at https://www.nwtime.org/support Mar 7 01:10:16.256125 ntpd[1947]: 7 Mar 01:10:16 ntpd[1947]: ---------------------------------------------------- Mar 7 01:10:16.264913 tar[1959]: linux-amd64/LICENSE Mar 7 01:10:16.264913 tar[1959]: linux-amd64/helm Mar 7 01:10:16.251685 ntpd[1947]: ntpd 4.2.8p17@1.4004-o Fri Mar 6 22:16:32 UTC 2026 (1): Starting Mar 7 01:10:16.265422 ntpd[1947]: 7 Mar 01:10:16 ntpd[1947]: proto: precision = 0.083 usec (-23) Mar 7 01:10:16.251714 ntpd[1947]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 7 01:10:16.251728 ntpd[1947]: ---------------------------------------------------- Mar 7 01:10:16.251740 ntpd[1947]: ntp-4 is maintained by Network Time Foundation, Mar 7 01:10:16.251752 ntpd[1947]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 7 01:10:16.251763 ntpd[1947]: corporation. Support and training for ntp-4 are Mar 7 01:10:16.251774 ntpd[1947]: available at https://www.nwtime.org/support Mar 7 01:10:16.251787 ntpd[1947]: ---------------------------------------------------- Mar 7 01:10:16.256902 ntpd[1947]: proto: precision = 0.083 usec (-23) Mar 7 01:10:16.257089 dbus-daemon[1943]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1863 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 7 01:10:16.286038 ntpd[1947]: 7 Mar 01:10:16 ntpd[1947]: basedate set to 2026-02-22 Mar 7 01:10:16.286038 ntpd[1947]: 7 Mar 01:10:16 ntpd[1947]: gps base set to 2026-02-22 (week 2407) Mar 7 01:10:16.276238 ntpd[1947]: basedate set to 2026-02-22 Mar 7 01:10:16.272722 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Mar 7 01:10:16.276262 ntpd[1947]: gps base set to 2026-02-22 (week 2407) Mar 7 01:10:16.294778 ntpd[1947]: Listen and drop on 0 v6wildcard [::]:123 Mar 7 01:10:16.297005 ntpd[1947]: 7 Mar 01:10:16 ntpd[1947]: Listen and drop on 0 v6wildcard [::]:123 Mar 7 01:10:16.297005 ntpd[1947]: 7 Mar 01:10:16 ntpd[1947]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 7 01:10:16.294853 ntpd[1947]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 7 01:10:16.297311 ntpd[1947]: Listen normally on 2 lo 127.0.0.1:123 Mar 7 01:10:16.299473 systemd[1]: Started update-engine.service - Update Engine. Mar 7 01:10:16.304196 ntpd[1947]: 7 Mar 01:10:16 ntpd[1947]: Listen normally on 2 lo 127.0.0.1:123 Mar 7 01:10:16.304196 ntpd[1947]: 7 Mar 01:10:16 ntpd[1947]: Listen normally on 3 eth0 172.31.24.34:123 Mar 7 01:10:16.304196 ntpd[1947]: 7 Mar 01:10:16 ntpd[1947]: Listen normally on 4 lo [::1]:123 Mar 7 01:10:16.304196 ntpd[1947]: 7 Mar 01:10:16 ntpd[1947]: bind(21) AF_INET6 fe80::46a:88ff:fe72:8fcf%2#123 flags 0x11 failed: Cannot assign requested address Mar 7 01:10:16.304196 ntpd[1947]: 7 Mar 01:10:16 ntpd[1947]: unable to create socket on eth0 (5) for fe80::46a:88ff:fe72:8fcf%2#123 Mar 7 01:10:16.304196 ntpd[1947]: 7 Mar 01:10:16 ntpd[1947]: failed to init interface for address fe80::46a:88ff:fe72:8fcf%2 Mar 7 01:10:16.304196 ntpd[1947]: 7 Mar 01:10:16 ntpd[1947]: Listening on routing socket on fd #21 for interface updates Mar 7 01:10:16.297360 ntpd[1947]: Listen normally on 3 eth0 172.31.24.34:123 Mar 7 01:10:16.297401 ntpd[1947]: Listen normally on 4 lo [::1]:123 Mar 7 01:10:16.297449 ntpd[1947]: bind(21) AF_INET6 fe80::46a:88ff:fe72:8fcf%2#123 flags 0x11 failed: Cannot assign requested address Mar 7 01:10:16.297470 ntpd[1947]: unable to create socket on eth0 (5) for fe80::46a:88ff:fe72:8fcf%2#123 Mar 7 01:10:16.297487 ntpd[1947]: failed to init interface for address fe80::46a:88ff:fe72:8fcf%2 Mar 7 01:10:16.297522 ntpd[1947]: Listening on routing socket on fd #21 for interface updates 
Mar 7 01:10:16.308588 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 7 01:10:16.331305 jq[1979]: true Mar 7 01:10:16.331449 update_engine[1956]: I20260307 01:10:16.310242 1956 update_check_scheduler.cc:74] Next update check in 11m50s Mar 7 01:10:16.326835 systemd[1]: Finished setup-oem.service - Setup OEM. Mar 7 01:10:16.332839 ntpd[1947]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 7 01:10:16.337166 ntpd[1947]: 7 Mar 01:10:16 ntpd[1947]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 7 01:10:16.337166 ntpd[1947]: 7 Mar 01:10:16 ntpd[1947]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 7 01:10:16.332876 ntpd[1947]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 7 01:10:16.392416 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Mar 7 01:10:16.405670 coreos-metadata[1942]: Mar 07 01:10:16.405 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 7 01:10:16.407254 coreos-metadata[1942]: Mar 07 01:10:16.407 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Mar 7 01:10:16.407897 coreos-metadata[1942]: Mar 07 01:10:16.407 INFO Fetch successful Mar 7 01:10:16.408112 coreos-metadata[1942]: Mar 07 01:10:16.407 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Mar 7 01:10:16.409061 extend-filesystems[1987]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Mar 7 01:10:16.409061 extend-filesystems[1987]: old_desc_blocks = 1, new_desc_blocks = 2 Mar 7 01:10:16.409061 extend-filesystems[1987]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. 
Mar 7 01:10:16.413454 extend-filesystems[1945]: Resized filesystem in /dev/nvme0n1p9 Mar 7 01:10:16.415765 coreos-metadata[1942]: Mar 07 01:10:16.410 INFO Fetch successful Mar 7 01:10:16.415765 coreos-metadata[1942]: Mar 07 01:10:16.410 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Mar 7 01:10:16.415765 coreos-metadata[1942]: Mar 07 01:10:16.415 INFO Fetch successful Mar 7 01:10:16.415765 coreos-metadata[1942]: Mar 07 01:10:16.415 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Mar 7 01:10:16.418023 coreos-metadata[1942]: Mar 07 01:10:16.417 INFO Fetch successful Mar 7 01:10:16.418023 coreos-metadata[1942]: Mar 07 01:10:16.417 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Mar 7 01:10:16.419091 coreos-metadata[1942]: Mar 07 01:10:16.418 INFO Fetch failed with 404: resource not found Mar 7 01:10:16.419091 coreos-metadata[1942]: Mar 07 01:10:16.418 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Mar 7 01:10:16.418926 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 7 01:10:16.419312 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Mar 7 01:10:16.419911 coreos-metadata[1942]: Mar 07 01:10:16.419 INFO Fetch successful Mar 7 01:10:16.419911 coreos-metadata[1942]: Mar 07 01:10:16.419 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Mar 7 01:10:16.420848 coreos-metadata[1942]: Mar 07 01:10:16.420 INFO Fetch successful Mar 7 01:10:16.420848 coreos-metadata[1942]: Mar 07 01:10:16.420 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Mar 7 01:10:16.421448 coreos-metadata[1942]: Mar 07 01:10:16.421 INFO Fetch successful Mar 7 01:10:16.421448 coreos-metadata[1942]: Mar 07 01:10:16.421 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Mar 7 01:10:16.422925 coreos-metadata[1942]: Mar 07 01:10:16.421 INFO Fetch successful Mar 7 01:10:16.422925 coreos-metadata[1942]: Mar 07 01:10:16.421 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Mar 7 01:10:16.423685 coreos-metadata[1942]: Mar 07 01:10:16.423 INFO Fetch successful Mar 7 01:10:16.444288 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1763) Mar 7 01:10:16.461034 bash[2019]: Updated "/home/core/.ssh/authorized_keys" Mar 7 01:10:16.468248 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 7 01:10:16.478171 systemd[1]: Starting sshkeys.service... Mar 7 01:10:16.574483 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 7 01:10:16.586358 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Mar 7 01:10:16.589349 systemd-logind[1955]: Watching system buttons on /dev/input/event1 (Power Button) Mar 7 01:10:16.589381 systemd-logind[1955]: Watching system buttons on /dev/input/event2 (Sleep Button) Mar 7 01:10:16.589403 systemd-logind[1955]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 7 01:10:16.589687 systemd-logind[1955]: New seat seat0. Mar 7 01:10:16.592054 systemd[1]: Started systemd-logind.service - User Login Management. Mar 7 01:10:16.596432 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 7 01:10:16.598735 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 7 01:10:16.757929 dbus-daemon[1943]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 7 01:10:16.758915 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Mar 7 01:10:16.763884 dbus-daemon[1943]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1993 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 7 01:10:16.774422 systemd[1]: Starting polkit.service - Authorization Manager... Mar 7 01:10:16.834712 sshd_keygen[1990]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 7 01:10:16.839327 polkitd[2081]: Started polkitd version 121 Mar 7 01:10:16.852640 polkitd[2081]: Loading rules from directory /etc/polkit-1/rules.d Mar 7 01:10:16.862581 polkitd[2081]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 7 01:10:16.865374 polkitd[2081]: Finished loading, compiling and executing 2 rules Mar 7 01:10:16.876681 dbus-daemon[1943]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 7 01:10:16.878083 systemd[1]: Started polkit.service - Authorization Manager. 
Mar 7 01:10:16.879794 polkitd[2081]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 7 01:10:16.890674 coreos-metadata[2041]: Mar 07 01:10:16.889 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 7 01:10:16.899236 coreos-metadata[2041]: Mar 07 01:10:16.896 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Mar 7 01:10:16.899743 coreos-metadata[2041]: Mar 07 01:10:16.899 INFO Fetch successful Mar 7 01:10:16.899807 locksmithd[1995]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 7 01:10:16.904669 coreos-metadata[2041]: Mar 07 01:10:16.899 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Mar 7 01:10:16.904669 coreos-metadata[2041]: Mar 07 01:10:16.903 INFO Fetch successful Mar 7 01:10:16.904198 unknown[2041]: wrote ssh authorized keys file for user: core Mar 7 01:10:16.963159 systemd-hostnamed[1993]: Hostname set to (transient) Mar 7 01:10:16.963709 systemd-resolved[1865]: System hostname changed to 'ip-172-31-24-34'. Mar 7 01:10:16.978005 update-ssh-keys[2132]: Updated "/home/core/.ssh/authorized_keys" Mar 7 01:10:16.978719 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 7 01:10:16.988914 systemd[1]: Finished sshkeys.service. Mar 7 01:10:17.016508 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 7 01:10:17.024338 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 7 01:10:17.036937 systemd[1]: issuegen.service: Deactivated successfully. Mar 7 01:10:17.037391 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 7 01:10:17.052073 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 7 01:10:17.079385 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 7 01:10:17.090451 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Mar 7 01:10:17.097409 containerd[1977]: time="2026-03-07T01:10:17.097301685Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 7 01:10:17.102319 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 7 01:10:17.103602 systemd[1]: Reached target getty.target - Login Prompts. Mar 7 01:10:17.145767 containerd[1977]: time="2026-03-07T01:10:17.145701795Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 7 01:10:17.148645 containerd[1977]: time="2026-03-07T01:10:17.147870557Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:10:17.148645 containerd[1977]: time="2026-03-07T01:10:17.147916307Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 7 01:10:17.148645 containerd[1977]: time="2026-03-07T01:10:17.147941610Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 7 01:10:17.148645 containerd[1977]: time="2026-03-07T01:10:17.148143113Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 7 01:10:17.148645 containerd[1977]: time="2026-03-07T01:10:17.148166862Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 7 01:10:17.148645 containerd[1977]: time="2026-03-07T01:10:17.148242048Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:10:17.148645 containerd[1977]: time="2026-03-07T01:10:17.148260585Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 7 01:10:17.148645 containerd[1977]: time="2026-03-07T01:10:17.148480859Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:10:17.148645 containerd[1977]: time="2026-03-07T01:10:17.148503485Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 7 01:10:17.148645 containerd[1977]: time="2026-03-07T01:10:17.148523555Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:10:17.148645 containerd[1977]: time="2026-03-07T01:10:17.148538988Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 7 01:10:17.149740 containerd[1977]: time="2026-03-07T01:10:17.148630064Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 7 01:10:17.149740 containerd[1977]: time="2026-03-07T01:10:17.148923904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 7 01:10:17.149740 containerd[1977]: time="2026-03-07T01:10:17.149112269Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:10:17.149740 containerd[1977]: time="2026-03-07T01:10:17.149135881Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 7 01:10:17.149740 containerd[1977]: time="2026-03-07T01:10:17.149235476Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 7 01:10:17.149740 containerd[1977]: time="2026-03-07T01:10:17.149290152Z" level=info msg="metadata content store policy set" policy=shared Mar 7 01:10:17.153671 containerd[1977]: time="2026-03-07T01:10:17.153632113Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 7 01:10:17.153766 containerd[1977]: time="2026-03-07T01:10:17.153694697Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 7 01:10:17.153766 containerd[1977]: time="2026-03-07T01:10:17.153716927Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 7 01:10:17.153766 containerd[1977]: time="2026-03-07T01:10:17.153736460Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 7 01:10:17.153883 containerd[1977]: time="2026-03-07T01:10:17.153774041Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 7 01:10:17.153962 containerd[1977]: time="2026-03-07T01:10:17.153940493Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 7 01:10:17.154329 containerd[1977]: time="2026-03-07T01:10:17.154305735Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Mar 7 01:10:17.154487 containerd[1977]: time="2026-03-07T01:10:17.154455158Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 7 01:10:17.154487 containerd[1977]: time="2026-03-07T01:10:17.154479617Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 7 01:10:17.154587 containerd[1977]: time="2026-03-07T01:10:17.154498801Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 7 01:10:17.154587 containerd[1977]: time="2026-03-07T01:10:17.154520232Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 7 01:10:17.154587 containerd[1977]: time="2026-03-07T01:10:17.154539410Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 7 01:10:17.154587 containerd[1977]: time="2026-03-07T01:10:17.154558051Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 7 01:10:17.154587 containerd[1977]: time="2026-03-07T01:10:17.154582095Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 7 01:10:17.154756 containerd[1977]: time="2026-03-07T01:10:17.154603673Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 7 01:10:17.154756 containerd[1977]: time="2026-03-07T01:10:17.154624966Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 7 01:10:17.154756 containerd[1977]: time="2026-03-07T01:10:17.154643890Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Mar 7 01:10:17.154756 containerd[1977]: time="2026-03-07T01:10:17.154661839Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 7 01:10:17.154756 containerd[1977]: time="2026-03-07T01:10:17.154703767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 7 01:10:17.154756 containerd[1977]: time="2026-03-07T01:10:17.154724743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 7 01:10:17.154756 containerd[1977]: time="2026-03-07T01:10:17.154743609Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 7 01:10:17.155157 containerd[1977]: time="2026-03-07T01:10:17.154763640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 7 01:10:17.155157 containerd[1977]: time="2026-03-07T01:10:17.154782168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 7 01:10:17.155157 containerd[1977]: time="2026-03-07T01:10:17.154810308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 7 01:10:17.155157 containerd[1977]: time="2026-03-07T01:10:17.154828964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 7 01:10:17.155157 containerd[1977]: time="2026-03-07T01:10:17.154848002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 7 01:10:17.155157 containerd[1977]: time="2026-03-07T01:10:17.154871969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 7 01:10:17.155157 containerd[1977]: time="2026-03-07T01:10:17.154893765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Mar 7 01:10:17.155157 containerd[1977]: time="2026-03-07T01:10:17.154912506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 7 01:10:17.155157 containerd[1977]: time="2026-03-07T01:10:17.154931625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 7 01:10:17.155157 containerd[1977]: time="2026-03-07T01:10:17.155028889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 7 01:10:17.155157 containerd[1977]: time="2026-03-07T01:10:17.155050153Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 7 01:10:17.155157 containerd[1977]: time="2026-03-07T01:10:17.155089474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 7 01:10:17.155157 containerd[1977]: time="2026-03-07T01:10:17.155107896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 7 01:10:17.155157 containerd[1977]: time="2026-03-07T01:10:17.155125283Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 7 01:10:17.155628 containerd[1977]: time="2026-03-07T01:10:17.155185603Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 7 01:10:17.155628 containerd[1977]: time="2026-03-07T01:10:17.155211763Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 7 01:10:17.155628 containerd[1977]: time="2026-03-07T01:10:17.155447400Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Mar 7 01:10:17.155628 containerd[1977]: time="2026-03-07T01:10:17.155481837Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 7 01:10:17.155628 containerd[1977]: time="2026-03-07T01:10:17.155499715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 7 01:10:17.155628 containerd[1977]: time="2026-03-07T01:10:17.155527749Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 7 01:10:17.155628 containerd[1977]: time="2026-03-07T01:10:17.155543874Z" level=info msg="NRI interface is disabled by configuration." Mar 7 01:10:17.155628 containerd[1977]: time="2026-03-07T01:10:17.155559974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 7 01:10:17.156580 containerd[1977]: time="2026-03-07T01:10:17.155959673Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: 
SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 7 01:10:17.156580 containerd[1977]: time="2026-03-07T01:10:17.156058670Z" level=info msg="Connect containerd service" Mar 7 01:10:17.156580 containerd[1977]: time="2026-03-07T01:10:17.156111602Z" level=info msg="using legacy CRI server" Mar 7 01:10:17.156580 containerd[1977]: time="2026-03-07T01:10:17.156122448Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 7 01:10:17.156580 containerd[1977]: time="2026-03-07T01:10:17.156245043Z" 
level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 7 01:10:17.157472 containerd[1977]: time="2026-03-07T01:10:17.157079895Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 7 01:10:17.157472 containerd[1977]: time="2026-03-07T01:10:17.157222105Z" level=info msg="Start subscribing containerd event" Mar 7 01:10:17.157472 containerd[1977]: time="2026-03-07T01:10:17.157269319Z" level=info msg="Start recovering state" Mar 7 01:10:17.157472 containerd[1977]: time="2026-03-07T01:10:17.157344246Z" level=info msg="Start event monitor" Mar 7 01:10:17.157472 containerd[1977]: time="2026-03-07T01:10:17.157362180Z" level=info msg="Start snapshots syncer" Mar 7 01:10:17.157472 containerd[1977]: time="2026-03-07T01:10:17.157375438Z" level=info msg="Start cni network conf syncer for default" Mar 7 01:10:17.157472 containerd[1977]: time="2026-03-07T01:10:17.157385665Z" level=info msg="Start streaming server" Mar 7 01:10:17.158255 containerd[1977]: time="2026-03-07T01:10:17.157924328Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 7 01:10:17.158255 containerd[1977]: time="2026-03-07T01:10:17.158041027Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 7 01:10:17.158224 systemd[1]: Started containerd.service - containerd container runtime. 
Mar 7 01:10:17.162689 containerd[1977]: time="2026-03-07T01:10:17.162652776Z" level=info msg="containerd successfully booted in 0.067908s" Mar 7 01:10:17.252245 ntpd[1947]: bind(24) AF_INET6 fe80::46a:88ff:fe72:8fcf%2#123 flags 0x11 failed: Cannot assign requested address Mar 7 01:10:17.252736 ntpd[1947]: 7 Mar 01:10:17 ntpd[1947]: bind(24) AF_INET6 fe80::46a:88ff:fe72:8fcf%2#123 flags 0x11 failed: Cannot assign requested address Mar 7 01:10:17.252736 ntpd[1947]: 7 Mar 01:10:17 ntpd[1947]: unable to create socket on eth0 (6) for fe80::46a:88ff:fe72:8fcf%2#123 Mar 7 01:10:17.252736 ntpd[1947]: 7 Mar 01:10:17 ntpd[1947]: failed to init interface for address fe80::46a:88ff:fe72:8fcf%2 Mar 7 01:10:17.252290 ntpd[1947]: unable to create socket on eth0 (6) for fe80::46a:88ff:fe72:8fcf%2#123 Mar 7 01:10:17.252307 ntpd[1947]: failed to init interface for address fe80::46a:88ff:fe72:8fcf%2 Mar 7 01:10:17.433650 tar[1959]: linux-amd64/README.md Mar 7 01:10:17.445325 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 7 01:10:17.631252 systemd-networkd[1863]: eth0: Gained IPv6LL Mar 7 01:10:17.634530 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 7 01:10:17.635801 systemd[1]: Reached target network-online.target - Network is Online. Mar 7 01:10:17.640410 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Mar 7 01:10:17.644952 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:10:17.650081 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 7 01:10:17.686260 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 7 01:10:17.711765 amazon-ssm-agent[2166]: Initializing new seelog logger Mar 7 01:10:17.712176 amazon-ssm-agent[2166]: New Seelog Logger Creation Complete Mar 7 01:10:17.712176 amazon-ssm-agent[2166]: 2026/03/07 01:10:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Mar 7 01:10:17.712176 amazon-ssm-agent[2166]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 7 01:10:17.712453 amazon-ssm-agent[2166]: 2026/03/07 01:10:17 processing appconfig overrides Mar 7 01:10:17.712826 amazon-ssm-agent[2166]: 2026/03/07 01:10:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 7 01:10:17.712826 amazon-ssm-agent[2166]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 7 01:10:17.712946 amazon-ssm-agent[2166]: 2026/03/07 01:10:17 processing appconfig overrides Mar 7 01:10:17.713202 amazon-ssm-agent[2166]: 2026/03/07 01:10:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 7 01:10:17.713202 amazon-ssm-agent[2166]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 7 01:10:17.713309 amazon-ssm-agent[2166]: 2026/03/07 01:10:17 processing appconfig overrides Mar 7 01:10:17.713685 amazon-ssm-agent[2166]: 2026-03-07 01:10:17 INFO Proxy environment variables: Mar 7 01:10:17.716267 amazon-ssm-agent[2166]: 2026/03/07 01:10:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 7 01:10:17.716267 amazon-ssm-agent[2166]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Mar 7 01:10:17.716404 amazon-ssm-agent[2166]: 2026/03/07 01:10:17 processing appconfig overrides Mar 7 01:10:17.814206 amazon-ssm-agent[2166]: 2026-03-07 01:10:17 INFO no_proxy: Mar 7 01:10:17.911676 amazon-ssm-agent[2166]: 2026-03-07 01:10:17 INFO https_proxy: Mar 7 01:10:17.947717 amazon-ssm-agent[2166]: 2026-03-07 01:10:17 INFO http_proxy: Mar 7 01:10:17.947717 amazon-ssm-agent[2166]: 2026-03-07 01:10:17 INFO Checking if agent identity type OnPrem can be assumed Mar 7 01:10:17.947717 amazon-ssm-agent[2166]: 2026-03-07 01:10:17 INFO Checking if agent identity type EC2 can be assumed Mar 7 01:10:17.947717 amazon-ssm-agent[2166]: 2026-03-07 01:10:17 INFO Agent will take identity from EC2 Mar 7 01:10:17.947717 amazon-ssm-agent[2166]: 2026-03-07 01:10:17 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 7 01:10:17.947717 amazon-ssm-agent[2166]: 2026-03-07 01:10:17 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 7 01:10:17.947717 amazon-ssm-agent[2166]: 2026-03-07 01:10:17 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 7 01:10:17.947717 amazon-ssm-agent[2166]: 2026-03-07 01:10:17 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Mar 7 01:10:17.947717 amazon-ssm-agent[2166]: 2026-03-07 01:10:17 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Mar 7 01:10:17.947717 amazon-ssm-agent[2166]: 2026-03-07 01:10:17 INFO [amazon-ssm-agent] Starting Core Agent Mar 7 01:10:17.947717 amazon-ssm-agent[2166]: 2026-03-07 01:10:17 INFO [amazon-ssm-agent] registrar detected. Attempting registration Mar 7 01:10:17.947717 amazon-ssm-agent[2166]: 2026-03-07 01:10:17 INFO [Registrar] Starting registrar module Mar 7 01:10:17.947717 amazon-ssm-agent[2166]: 2026-03-07 01:10:17 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Mar 7 01:10:17.947717 amazon-ssm-agent[2166]: 2026-03-07 01:10:17 INFO [EC2Identity] EC2 registration was successful. 
Mar 7 01:10:17.947717 amazon-ssm-agent[2166]: 2026-03-07 01:10:17 INFO [CredentialRefresher] credentialRefresher has started Mar 7 01:10:17.947717 amazon-ssm-agent[2166]: 2026-03-07 01:10:17 INFO [CredentialRefresher] Starting credentials refresher loop Mar 7 01:10:17.947717 amazon-ssm-agent[2166]: 2026-03-07 01:10:17 INFO EC2RoleProvider Successfully connected with instance profile role credentials Mar 7 01:10:18.009486 amazon-ssm-agent[2166]: 2026-03-07 01:10:17 INFO [CredentialRefresher] Next credential rotation will be in 30.24998965485 minutes Mar 7 01:10:18.431847 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 7 01:10:18.439348 systemd[1]: Started sshd@0-172.31.24.34:22-68.220.241.50:34252.service - OpenSSH per-connection server daemon (68.220.241.50:34252). Mar 7 01:10:18.940632 sshd[2185]: Accepted publickey for core from 68.220.241.50 port 34252 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:10:18.943103 sshd[2185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:10:18.958249 systemd-logind[1955]: New session 1 of user core. Mar 7 01:10:18.959131 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 7 01:10:18.969202 amazon-ssm-agent[2166]: 2026-03-07 01:10:18 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Mar 7 01:10:18.970277 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 7 01:10:19.000261 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 7 01:10:19.017216 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Mar 7 01:10:19.022868 (systemd)[2192]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 7 01:10:19.072102 amazon-ssm-agent[2166]: 2026-03-07 01:10:18 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2190) started Mar 7 01:10:19.172057 amazon-ssm-agent[2166]: 2026-03-07 01:10:18 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Mar 7 01:10:19.209658 systemd[2192]: Queued start job for default target default.target. Mar 7 01:10:19.216540 systemd[2192]: Created slice app.slice - User Application Slice. Mar 7 01:10:19.216583 systemd[2192]: Reached target paths.target - Paths. Mar 7 01:10:19.216603 systemd[2192]: Reached target timers.target - Timers. Mar 7 01:10:19.218114 systemd[2192]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 7 01:10:19.235667 systemd[2192]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 7 01:10:19.235837 systemd[2192]: Reached target sockets.target - Sockets. Mar 7 01:10:19.235860 systemd[2192]: Reached target basic.target - Basic System. Mar 7 01:10:19.236073 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 7 01:10:19.236619 systemd[2192]: Reached target default.target - Main User Target. Mar 7 01:10:19.236681 systemd[2192]: Startup finished in 202ms. Mar 7 01:10:19.242199 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 7 01:10:19.609345 systemd[1]: Started sshd@1-172.31.24.34:22-68.220.241.50:34268.service - OpenSSH per-connection server daemon (68.220.241.50:34268). Mar 7 01:10:20.088236 sshd[2212]: Accepted publickey for core from 68.220.241.50 port 34268 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:10:20.090201 sshd[2212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:10:20.095296 systemd-logind[1955]: New session 2 of user core. 
Mar 7 01:10:20.098179 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 7 01:10:20.161452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:10:20.163551 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 7 01:10:20.165320 systemd[1]: Startup finished in 611ms (kernel) + 8.606s (initrd) + 7.902s (userspace) = 17.120s. Mar 7 01:10:20.172602 (kubelet)[2220]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:10:20.252264 ntpd[1947]: Listen normally on 7 eth0 [fe80::46a:88ff:fe72:8fcf%2]:123 Mar 7 01:10:20.252679 ntpd[1947]: 7 Mar 01:10:20 ntpd[1947]: Listen normally on 7 eth0 [fe80::46a:88ff:fe72:8fcf%2]:123 Mar 7 01:10:20.441531 sshd[2212]: pam_unix(sshd:session): session closed for user core Mar 7 01:10:20.445811 systemd[1]: sshd@1-172.31.24.34:22-68.220.241.50:34268.service: Deactivated successfully. Mar 7 01:10:20.448441 systemd[1]: session-2.scope: Deactivated successfully. Mar 7 01:10:20.449893 systemd-logind[1955]: Session 2 logged out. Waiting for processes to exit. Mar 7 01:10:20.451340 systemd-logind[1955]: Removed session 2. Mar 7 01:10:20.526206 systemd[1]: Started sshd@2-172.31.24.34:22-68.220.241.50:34280.service - OpenSSH per-connection server daemon (68.220.241.50:34280). Mar 7 01:10:21.015059 sshd[2233]: Accepted publickey for core from 68.220.241.50 port 34280 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:10:21.016390 sshd[2233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:10:21.021361 systemd-logind[1955]: New session 3 of user core. Mar 7 01:10:21.027987 systemd[1]: Started session-3.scope - Session 3 of User core. 
Mar 7 01:10:21.216248 kubelet[2220]: E0307 01:10:21.216157 2220 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:10:21.219015 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:10:21.219243 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:10:21.219585 systemd[1]: kubelet.service: Consumed 1.048s CPU time. Mar 7 01:10:21.358408 sshd[2233]: pam_unix(sshd:session): session closed for user core Mar 7 01:10:21.361622 systemd[1]: sshd@2-172.31.24.34:22-68.220.241.50:34280.service: Deactivated successfully. Mar 7 01:10:21.363573 systemd[1]: session-3.scope: Deactivated successfully. Mar 7 01:10:21.364885 systemd-logind[1955]: Session 3 logged out. Waiting for processes to exit. Mar 7 01:10:21.366124 systemd-logind[1955]: Removed session 3. Mar 7 01:10:21.443245 systemd[1]: Started sshd@3-172.31.24.34:22-68.220.241.50:57416.service - OpenSSH per-connection server daemon (68.220.241.50:57416). Mar 7 01:10:21.924478 sshd[2242]: Accepted publickey for core from 68.220.241.50 port 57416 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:10:21.925929 sshd[2242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:10:21.931061 systemd-logind[1955]: New session 4 of user core. Mar 7 01:10:21.943268 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 7 01:10:22.273241 sshd[2242]: pam_unix(sshd:session): session closed for user core Mar 7 01:10:22.277558 systemd-logind[1955]: Session 4 logged out. Waiting for processes to exit. Mar 7 01:10:22.278292 systemd[1]: sshd@3-172.31.24.34:22-68.220.241.50:57416.service: Deactivated successfully. 
Mar 7 01:10:22.280362 systemd[1]: session-4.scope: Deactivated successfully. Mar 7 01:10:22.281458 systemd-logind[1955]: Removed session 4. Mar 7 01:10:22.365344 systemd[1]: Started sshd@4-172.31.24.34:22-68.220.241.50:57424.service - OpenSSH per-connection server daemon (68.220.241.50:57424). Mar 7 01:10:22.854528 sshd[2249]: Accepted publickey for core from 68.220.241.50 port 57424 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:10:22.855303 sshd[2249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:10:22.860337 systemd-logind[1955]: New session 5 of user core. Mar 7 01:10:22.870216 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 7 01:10:23.169233 sudo[2252]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 7 01:10:23.169626 sudo[2252]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:10:23.185716 sudo[2252]: pam_unix(sudo:session): session closed for user root Mar 7 01:10:24.604916 systemd-resolved[1865]: Clock change detected. Flushing caches. Mar 7 01:10:24.616044 sshd[2249]: pam_unix(sshd:session): session closed for user core Mar 7 01:10:24.620228 systemd-logind[1955]: Session 5 logged out. Waiting for processes to exit. Mar 7 01:10:24.620805 systemd[1]: sshd@4-172.31.24.34:22-68.220.241.50:57424.service: Deactivated successfully. Mar 7 01:10:24.622729 systemd[1]: session-5.scope: Deactivated successfully. Mar 7 01:10:24.624007 systemd-logind[1955]: Removed session 5. Mar 7 01:10:24.710225 systemd[1]: Started sshd@5-172.31.24.34:22-68.220.241.50:57430.service - OpenSSH per-connection server daemon (68.220.241.50:57430). 
Mar 7 01:10:25.200521 sshd[2257]: Accepted publickey for core from 68.220.241.50 port 57430 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:10:25.202005 sshd[2257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:10:25.207096 systemd-logind[1955]: New session 6 of user core. Mar 7 01:10:25.214111 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 7 01:10:25.477670 sudo[2261]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 7 01:10:25.478091 sudo[2261]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:10:25.482234 sudo[2261]: pam_unix(sudo:session): session closed for user root Mar 7 01:10:25.487640 sudo[2260]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 7 01:10:25.488045 sudo[2260]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:10:25.508274 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 7 01:10:25.510410 auditctl[2264]: No rules Mar 7 01:10:25.511842 systemd[1]: audit-rules.service: Deactivated successfully. Mar 7 01:10:25.512141 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 7 01:10:25.514513 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 7 01:10:25.558081 augenrules[2282]: No rules Mar 7 01:10:25.559585 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 7 01:10:25.561104 sudo[2260]: pam_unix(sudo:session): session closed for user root Mar 7 01:10:25.639180 sshd[2257]: pam_unix(sshd:session): session closed for user core Mar 7 01:10:25.643644 systemd-logind[1955]: Session 6 logged out. Waiting for processes to exit. Mar 7 01:10:25.644072 systemd[1]: sshd@5-172.31.24.34:22-68.220.241.50:57430.service: Deactivated successfully. 
Mar 7 01:10:25.646075 systemd[1]: session-6.scope: Deactivated successfully. Mar 7 01:10:25.647516 systemd-logind[1955]: Removed session 6. Mar 7 01:10:25.738305 systemd[1]: Started sshd@6-172.31.24.34:22-68.220.241.50:57440.service - OpenSSH per-connection server daemon (68.220.241.50:57440). Mar 7 01:10:26.218911 sshd[2290]: Accepted publickey for core from 68.220.241.50 port 57440 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:10:26.220015 sshd[2290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:10:26.225418 systemd-logind[1955]: New session 7 of user core. Mar 7 01:10:26.234090 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 7 01:10:26.491580 sudo[2293]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 7 01:10:26.492008 sudo[2293]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:10:27.008200 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 7 01:10:27.010006 (dockerd)[2308]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 7 01:10:27.592399 dockerd[2308]: time="2026-03-07T01:10:27.592335720Z" level=info msg="Starting up" Mar 7 01:10:27.831704 systemd[1]: var-lib-docker-metacopy\x2dcheck516470457-merged.mount: Deactivated successfully. Mar 7 01:10:27.851764 dockerd[2308]: time="2026-03-07T01:10:27.851636277Z" level=info msg="Loading containers: start." Mar 7 01:10:27.964910 kernel: Initializing XFRM netlink socket Mar 7 01:10:27.996365 (udev-worker)[2334]: Network interface NamePolicy= disabled on kernel command line. Mar 7 01:10:28.058513 systemd-networkd[1863]: docker0: Link UP Mar 7 01:10:28.081794 dockerd[2308]: time="2026-03-07T01:10:28.081746443Z" level=info msg="Loading containers: done." 
Mar 7 01:10:28.113593 dockerd[2308]: time="2026-03-07T01:10:28.113421876Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 7 01:10:28.113774 dockerd[2308]: time="2026-03-07T01:10:28.113589436Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 7 01:10:28.113774 dockerd[2308]: time="2026-03-07T01:10:28.113736200Z" level=info msg="Daemon has completed initialization" Mar 7 01:10:28.146701 dockerd[2308]: time="2026-03-07T01:10:28.146102368Z" level=info msg="API listen on /run/docker.sock" Mar 7 01:10:28.146382 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 7 01:10:29.023458 containerd[1977]: time="2026-03-07T01:10:29.023421612Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 7 01:10:29.545248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2515955647.mount: Deactivated successfully. 
Mar 7 01:10:32.242270 containerd[1977]: time="2026-03-07T01:10:32.242208840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:10:32.243807 containerd[1977]: time="2026-03-07T01:10:32.243757623Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116186" Mar 7 01:10:32.245072 containerd[1977]: time="2026-03-07T01:10:32.244885860Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:10:32.247800 containerd[1977]: time="2026-03-07T01:10:32.247743640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:10:32.249262 containerd[1977]: time="2026-03-07T01:10:32.249223860Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 3.225767061s" Mar 7 01:10:32.249360 containerd[1977]: time="2026-03-07T01:10:32.249264203Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\"" Mar 7 01:10:32.250242 containerd[1977]: time="2026-03-07T01:10:32.250215324Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\"" Mar 7 01:10:32.627972 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Mar 7 01:10:32.633597 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:10:34.375688 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:10:34.381797 (kubelet)[2517]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:10:34.427746 kubelet[2517]: E0307 01:10:34.427581 2517 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:10:34.433523 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:10:34.433720 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:10:35.803453 containerd[1977]: time="2026-03-07T01:10:35.803401594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:10:35.804754 containerd[1977]: time="2026-03-07T01:10:35.804706435Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021810"
Mar 7 01:10:35.806214 containerd[1977]: time="2026-03-07T01:10:35.806174848Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:10:35.808853 containerd[1977]: time="2026-03-07T01:10:35.808785336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:10:35.810053 containerd[1977]: time="2026-03-07T01:10:35.809903665Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 3.559653678s"
Mar 7 01:10:35.810053 containerd[1977]: time="2026-03-07T01:10:35.809946572Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\""
Mar 7 01:10:35.810766 containerd[1977]: time="2026-03-07T01:10:35.810736586Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\""
Mar 7 01:10:37.664998 containerd[1977]: time="2026-03-07T01:10:37.664933223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:10:37.666786 containerd[1977]: time="2026-03-07T01:10:37.666638648Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162746"
Mar 7 01:10:37.668743 containerd[1977]: time="2026-03-07T01:10:37.668674524Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:10:37.672751 containerd[1977]: time="2026-03-07T01:10:37.672686632Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:10:37.674102 containerd[1977]: time="2026-03-07T01:10:37.673919487Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 1.863147344s"
Mar 7 01:10:37.674102 containerd[1977]: time="2026-03-07T01:10:37.673963512Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\""
Mar 7 01:10:37.674999 containerd[1977]: time="2026-03-07T01:10:37.674966171Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\""
Mar 7 01:10:38.796702 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1639313386.mount: Deactivated successfully.
Mar 7 01:10:39.378124 containerd[1977]: time="2026-03-07T01:10:39.378068589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:10:39.379237 containerd[1977]: time="2026-03-07T01:10:39.379093157Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828647"
Mar 7 01:10:39.380254 containerd[1977]: time="2026-03-07T01:10:39.380194661Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:10:39.382379 containerd[1977]: time="2026-03-07T01:10:39.382328680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:10:39.383450 containerd[1977]: time="2026-03-07T01:10:39.382987091Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 1.707982492s"
Mar 7 01:10:39.383450 containerd[1977]: time="2026-03-07T01:10:39.383028208Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\""
Mar 7 01:10:39.383623 containerd[1977]: time="2026-03-07T01:10:39.383601124Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Mar 7 01:10:39.894375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1931602739.mount: Deactivated successfully.
Mar 7 01:10:41.189675 containerd[1977]: time="2026-03-07T01:10:41.189616610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:10:41.191247 containerd[1977]: time="2026-03-07T01:10:41.191194798Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Mar 7 01:10:41.192220 containerd[1977]: time="2026-03-07T01:10:41.192167215Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:10:41.195967 containerd[1977]: time="2026-03-07T01:10:41.195890671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:10:41.197376 containerd[1977]: time="2026-03-07T01:10:41.197168468Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.813529209s"
Mar 7 01:10:41.197376 containerd[1977]: time="2026-03-07T01:10:41.197212526Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Mar 7 01:10:41.198075 containerd[1977]: time="2026-03-07T01:10:41.197911764Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 7 01:10:41.907257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1774193030.mount: Deactivated successfully.
Mar 7 01:10:41.913143 containerd[1977]: time="2026-03-07T01:10:41.913088681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:10:41.914162 containerd[1977]: time="2026-03-07T01:10:41.914116092Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Mar 7 01:10:41.915575 containerd[1977]: time="2026-03-07T01:10:41.915432504Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:10:41.918023 containerd[1977]: time="2026-03-07T01:10:41.917958204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:10:41.919543 containerd[1977]: time="2026-03-07T01:10:41.918910756Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 720.657538ms"
Mar 7 01:10:41.919543 containerd[1977]: time="2026-03-07T01:10:41.918955901Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Mar 7 01:10:41.919896 containerd[1977]: time="2026-03-07T01:10:41.919845077Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Mar 7 01:10:42.409897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2841347354.mount: Deactivated successfully.
Mar 7 01:10:43.734128 containerd[1977]: time="2026-03-07T01:10:43.734072057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:10:43.735851 containerd[1977]: time="2026-03-07T01:10:43.735604412Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718840"
Mar 7 01:10:43.738302 containerd[1977]: time="2026-03-07T01:10:43.736793886Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:10:43.740244 containerd[1977]: time="2026-03-07T01:10:43.740206019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 01:10:43.741695 containerd[1977]: time="2026-03-07T01:10:43.741658358Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.821751413s"
Mar 7 01:10:43.741774 containerd[1977]: time="2026-03-07T01:10:43.741701754Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Mar 7 01:10:44.627941 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 7 01:10:44.640073 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:10:44.931162 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:10:44.934141 (kubelet)[2689]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 01:10:45.005895 kubelet[2689]: E0307 01:10:45.005627 2689 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 01:10:45.009384 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 01:10:45.009759 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 01:10:47.495315 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:10:47.501195 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:10:47.537668 systemd[1]: Reloading requested from client PID 2703 ('systemctl') (unit session-7.scope)...
Mar 7 01:10:47.537692 systemd[1]: Reloading...
Mar 7 01:10:47.690126 zram_generator::config[2744]: No configuration found.
Mar 7 01:10:47.830042 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 01:10:47.919482 systemd[1]: Reloading finished in 381 ms.
Mar 7 01:10:47.973768 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 7 01:10:47.973905 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 7 01:10:47.974210 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:10:47.979313 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 01:10:48.207356 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 01:10:48.217317 (kubelet)[2807]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 7 01:10:48.261077 kubelet[2807]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 7 01:10:48.261077 kubelet[2807]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 7 01:10:48.261077 kubelet[2807]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 7 01:10:48.263328 kubelet[2807]: I0307 01:10:48.261833 2807 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 7 01:10:48.339147 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar 7 01:10:48.584346 kubelet[2807]: I0307 01:10:48.583751 2807 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 7 01:10:48.584346 kubelet[2807]: I0307 01:10:48.583781 2807 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 7 01:10:48.584346 kubelet[2807]: I0307 01:10:48.584112 2807 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 7 01:10:48.689907 kubelet[2807]: I0307 01:10:48.689637 2807 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 7 01:10:48.690302 kubelet[2807]: E0307 01:10:48.690263 2807 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.24.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.24.34:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 7 01:10:48.706743 kubelet[2807]: E0307 01:10:48.706582 2807 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 7 01:10:48.706743 kubelet[2807]: I0307 01:10:48.706736 2807 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 7 01:10:48.715629 kubelet[2807]: I0307 01:10:48.715590 2807 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 7 01:10:48.718794 kubelet[2807]: I0307 01:10:48.718729 2807 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 7 01:10:48.722442 kubelet[2807]: I0307 01:10:48.718787 2807 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-34","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 7 01:10:48.722442 kubelet[2807]: I0307 01:10:48.722443 2807 topology_manager.go:138] "Creating topology manager with none policy"
Mar 7 01:10:48.722758 kubelet[2807]: I0307 01:10:48.722462 2807 container_manager_linux.go:303] "Creating device plugin manager"
Mar 7 01:10:48.722758 kubelet[2807]: I0307 01:10:48.722747 2807 state_mem.go:36] "Initialized new in-memory state store"
Mar 7 01:10:48.729221 kubelet[2807]: I0307 01:10:48.729183 2807 kubelet.go:480] "Attempting to sync node with API server"
Mar 7 01:10:48.729221 kubelet[2807]: I0307 01:10:48.729222 2807 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 7 01:10:48.729411 kubelet[2807]: I0307 01:10:48.729260 2807 kubelet.go:386] "Adding apiserver pod source"
Mar 7 01:10:48.733228 kubelet[2807]: I0307 01:10:48.733197 2807 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 7 01:10:48.739326 kubelet[2807]: E0307 01:10:48.739146 2807 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.24.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-34&limit=500&resourceVersion=0\": dial tcp 172.31.24.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 7 01:10:48.739615 kubelet[2807]: I0307 01:10:48.739561 2807 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 7 01:10:48.741428 kubelet[2807]: I0307 01:10:48.741400 2807 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 7 01:10:48.743522 kubelet[2807]: W0307 01:10:48.742755 2807 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 7 01:10:48.743522 kubelet[2807]: E0307 01:10:48.743400 2807 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.24.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.24.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 7 01:10:48.750214 kubelet[2807]: I0307 01:10:48.750183 2807 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 7 01:10:48.750348 kubelet[2807]: I0307 01:10:48.750256 2807 server.go:1289] "Started kubelet"
Mar 7 01:10:48.750495 kubelet[2807]: I0307 01:10:48.750462 2807 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 7 01:10:48.751779 kubelet[2807]: I0307 01:10:48.751702 2807 server.go:317] "Adding debug handlers to kubelet server"
Mar 7 01:10:48.754119 kubelet[2807]: I0307 01:10:48.753662 2807 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 7 01:10:48.754333 kubelet[2807]: I0307 01:10:48.754294 2807 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 7 01:10:48.756561 kubelet[2807]: E0307 01:10:48.754544 2807 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.24.34:6443/api/v1/namespaces/default/events\": dial tcp 172.31.24.34:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-24-34.189a69e810482c3a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-34,UID:ip-172-31-24-34,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-34,},FirstTimestamp:2026-03-07 01:10:48.75020601 +0000 UTC m=+0.528098490,LastTimestamp:2026-03-07 01:10:48.75020601 +0000 UTC m=+0.528098490,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-34,}"
Mar 7 01:10:48.757719 kubelet[2807]: I0307 01:10:48.757621 2807 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 7 01:10:48.758401 kubelet[2807]: I0307 01:10:48.758287 2807 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 7 01:10:48.761193 kubelet[2807]: E0307 01:10:48.760910 2807 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-34\" not found"
Mar 7 01:10:48.761193 kubelet[2807]: I0307 01:10:48.760953 2807 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 7 01:10:48.761327 kubelet[2807]: I0307 01:10:48.761211 2807 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 7 01:10:48.761327 kubelet[2807]: I0307 01:10:48.761263 2807 reconciler.go:26] "Reconciler: start to sync state"
Mar 7 01:10:48.761884 kubelet[2807]: E0307 01:10:48.761737 2807 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.24.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.24.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 7 01:10:48.765442 kubelet[2807]: E0307 01:10:48.764275 2807 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-34?timeout=10s\": dial tcp 172.31.24.34:6443: connect: connection refused" interval="200ms"
Mar 7 01:10:48.772796 kubelet[2807]: I0307 01:10:48.772760 2807 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 7 01:10:48.783954 kubelet[2807]: I0307 01:10:48.783925 2807 factory.go:223] Registration of the containerd container factory successfully
Mar 7 01:10:48.784837 kubelet[2807]: I0307 01:10:48.784126 2807 factory.go:223] Registration of the systemd container factory successfully
Mar 7 01:10:48.794060 kubelet[2807]: I0307 01:10:48.794018 2807 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 7 01:10:48.796473 kubelet[2807]: I0307 01:10:48.796439 2807 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 7 01:10:48.796473 kubelet[2807]: I0307 01:10:48.796471 2807 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 7 01:10:48.796619 kubelet[2807]: I0307 01:10:48.796496 2807 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 7 01:10:48.796619 kubelet[2807]: I0307 01:10:48.796505 2807 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 7 01:10:48.796619 kubelet[2807]: E0307 01:10:48.796552 2807 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 7 01:10:48.810085 kubelet[2807]: E0307 01:10:48.810057 2807 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.24.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.24.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 7 01:10:48.816990 kubelet[2807]: I0307 01:10:48.816959 2807 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 7 01:10:48.816990 kubelet[2807]: I0307 01:10:48.816980 2807 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 7 01:10:48.817161 kubelet[2807]: I0307 01:10:48.817001 2807 state_mem.go:36] "Initialized new in-memory state store"
Mar 7 01:10:48.818835 kubelet[2807]: I0307 01:10:48.818793 2807 policy_none.go:49] "None policy: Start"
Mar 7 01:10:48.818835 kubelet[2807]: I0307 01:10:48.818816 2807 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 7 01:10:48.818835 kubelet[2807]: I0307 01:10:48.818831 2807 state_mem.go:35] "Initializing new in-memory state store"
Mar 7 01:10:48.825519 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 7 01:10:48.851079 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 7 01:10:48.855782 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 7 01:10:48.861686 kubelet[2807]: E0307 01:10:48.861608 2807 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-34\" not found"
Mar 7 01:10:48.863922 kubelet[2807]: E0307 01:10:48.863895 2807 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 7 01:10:48.864258 kubelet[2807]: I0307 01:10:48.864132 2807 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 7 01:10:48.864258 kubelet[2807]: I0307 01:10:48.864144 2807 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 7 01:10:48.864898 kubelet[2807]: I0307 01:10:48.864723 2807 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 7 01:10:48.866750 kubelet[2807]: E0307 01:10:48.866541 2807 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 7 01:10:48.866750 kubelet[2807]: E0307 01:10:48.866585 2807 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-24-34\" not found"
Mar 7 01:10:48.911062 systemd[1]: Created slice kubepods-burstable-pod4823808f481d520d9108c54bc3303e4e.slice - libcontainer container kubepods-burstable-pod4823808f481d520d9108c54bc3303e4e.slice.
Mar 7 01:10:48.923666 kubelet[2807]: E0307 01:10:48.922902 2807 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-34\" not found" node="ip-172-31-24-34"
Mar 7 01:10:48.926226 systemd[1]: Created slice kubepods-burstable-pod5dbcb46d561f0cf9b2d7a695570efb7a.slice - libcontainer container kubepods-burstable-pod5dbcb46d561f0cf9b2d7a695570efb7a.slice.
Mar 7 01:10:48.929066 kubelet[2807]: E0307 01:10:48.928800 2807 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-34\" not found" node="ip-172-31-24-34"
Mar 7 01:10:48.931474 systemd[1]: Created slice kubepods-burstable-pod03e3800edee052738755fa2d6169a4e8.slice - libcontainer container kubepods-burstable-pod03e3800edee052738755fa2d6169a4e8.slice.
Mar 7 01:10:48.933374 kubelet[2807]: E0307 01:10:48.933341 2807 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-34\" not found" node="ip-172-31-24-34"
Mar 7 01:10:48.965443 kubelet[2807]: E0307 01:10:48.965391 2807 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-34?timeout=10s\": dial tcp 172.31.24.34:6443: connect: connection refused" interval="400ms"
Mar 7 01:10:48.966649 kubelet[2807]: I0307 01:10:48.966567 2807 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-34"
Mar 7 01:10:48.967052 kubelet[2807]: E0307 01:10:48.967021 2807 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.34:6443/api/v1/nodes\": dial tcp 172.31.24.34:6443: connect: connection refused" node="ip-172-31-24-34"
Mar 7 01:10:49.062582 kubelet[2807]: I0307 01:10:49.062420 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4823808f481d520d9108c54bc3303e4e-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-34\" (UID: \"4823808f481d520d9108c54bc3303e4e\") " pod="kube-system/kube-apiserver-ip-172-31-24-34"
Mar 7 01:10:49.062582 kubelet[2807]: I0307 01:10:49.062472 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5dbcb46d561f0cf9b2d7a695570efb7a-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-34\" (UID: \"5dbcb46d561f0cf9b2d7a695570efb7a\") " pod="kube-system/kube-controller-manager-ip-172-31-24-34"
Mar 7 01:10:49.062582 kubelet[2807]: I0307 01:10:49.062497 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5dbcb46d561f0cf9b2d7a695570efb7a-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-34\" (UID: \"5dbcb46d561f0cf9b2d7a695570efb7a\") " pod="kube-system/kube-controller-manager-ip-172-31-24-34"
Mar 7 01:10:49.062582 kubelet[2807]: I0307 01:10:49.062526 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dbcb46d561f0cf9b2d7a695570efb7a-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-34\" (UID: \"5dbcb46d561f0cf9b2d7a695570efb7a\") " pod="kube-system/kube-controller-manager-ip-172-31-24-34"
Mar 7 01:10:49.062582 kubelet[2807]: I0307 01:10:49.062552 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4823808f481d520d9108c54bc3303e4e-ca-certs\") pod \"kube-apiserver-ip-172-31-24-34\" (UID: \"4823808f481d520d9108c54bc3303e4e\") " pod="kube-system/kube-apiserver-ip-172-31-24-34"
Mar 7 01:10:49.063061 kubelet[2807]: I0307 01:10:49.062576 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4823808f481d520d9108c54bc3303e4e-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-34\" (UID: \"4823808f481d520d9108c54bc3303e4e\") " pod="kube-system/kube-apiserver-ip-172-31-24-34"
Mar 7 01:10:49.063061 kubelet[2807]: I0307 01:10:49.062724 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5dbcb46d561f0cf9b2d7a695570efb7a-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-34\" (UID: \"5dbcb46d561f0cf9b2d7a695570efb7a\") " pod="kube-system/kube-controller-manager-ip-172-31-24-34"
Mar 7 01:10:49.063061 kubelet[2807]: I0307 01:10:49.062763 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5dbcb46d561f0cf9b2d7a695570efb7a-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-34\" (UID: \"5dbcb46d561f0cf9b2d7a695570efb7a\") " pod="kube-system/kube-controller-manager-ip-172-31-24-34"
Mar 7 01:10:49.063061 kubelet[2807]: I0307 01:10:49.062787 2807 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/03e3800edee052738755fa2d6169a4e8-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-34\" (UID: \"03e3800edee052738755fa2d6169a4e8\") " pod="kube-system/kube-scheduler-ip-172-31-24-34"
Mar 7 01:10:49.169364 kubelet[2807]: I0307 01:10:49.169323 2807 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-34"
Mar 7 01:10:49.169735 kubelet[2807]: E0307 01:10:49.169703 2807 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.34:6443/api/v1/nodes\": dial tcp 172.31.24.34:6443: connect: connection refused" node="ip-172-31-24-34"
Mar 7 01:10:49.224895 containerd[1977]: time="2026-03-07T01:10:49.224835909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-34,Uid:4823808f481d520d9108c54bc3303e4e,Namespace:kube-system,Attempt:0,}"
Mar 7 01:10:49.230194 containerd[1977]: time="2026-03-07T01:10:49.230151322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-34,Uid:5dbcb46d561f0cf9b2d7a695570efb7a,Namespace:kube-system,Attempt:0,}"
Mar 7 01:10:49.234329 containerd[1977]: time="2026-03-07T01:10:49.234290366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-34,Uid:03e3800edee052738755fa2d6169a4e8,Namespace:kube-system,Attempt:0,}"
Mar 7 01:10:49.365816 kubelet[2807]: E0307 01:10:49.365775 2807 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-34?timeout=10s\": dial tcp 172.31.24.34:6443: connect: connection refused" interval="800ms"
Mar 7 01:10:49.571256 kubelet[2807]: I0307 01:10:49.571160 2807 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-34"
Mar 7 01:10:49.571671 kubelet[2807]: E0307 01:10:49.571516 2807 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.34:6443/api/v1/nodes\": dial tcp 172.31.24.34:6443: connect: connection refused" node="ip-172-31-24-34"
Mar 7 01:10:49.700789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3820903661.mount: Deactivated successfully.
Mar 7 01:10:49.709962 kubelet[2807]: E0307 01:10:49.709911 2807 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.24.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.24.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 7 01:10:49.715837 containerd[1977]: time="2026-03-07T01:10:49.715013500Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 7 01:10:49.715837 containerd[1977]: time="2026-03-07T01:10:49.715268297Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 01:10:49.717896 containerd[1977]: time="2026-03-07T01:10:49.717638245Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 7 01:10:49.718248 containerd[1977]: time="2026-03-07T01:10:49.718216704Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\"
value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:10:49.719055 containerd[1977]: time="2026-03-07T01:10:49.719018129Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:10:49.719932 containerd[1977]: time="2026-03-07T01:10:49.719765015Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:10:49.719932 containerd[1977]: time="2026-03-07T01:10:49.719888247Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 7 01:10:49.722890 containerd[1977]: time="2026-03-07T01:10:49.721962286Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 491.723996ms" Mar 7 01:10:49.722890 containerd[1977]: time="2026-03-07T01:10:49.722793198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:10:49.725920 containerd[1977]: time="2026-03-07T01:10:49.725888165Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 500.932226ms" Mar 7 01:10:49.727287 containerd[1977]: 
time="2026-03-07T01:10:49.727250662Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 492.882788ms" Mar 7 01:10:49.948055 kubelet[2807]: E0307 01:10:49.940185 2807 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.24.34:6443/api/v1/namespaces/default/events\": dial tcp 172.31.24.34:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-24-34.189a69e810482c3a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-34,UID:ip-172-31-24-34,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-34,},FirstTimestamp:2026-03-07 01:10:48.75020601 +0000 UTC m=+0.528098490,LastTimestamp:2026-03-07 01:10:48.75020601 +0000 UTC m=+0.528098490,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-34,}" Mar 7 01:10:49.987540 containerd[1977]: time="2026-03-07T01:10:49.987422108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:10:49.987540 containerd[1977]: time="2026-03-07T01:10:49.987411017Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:10:49.989149 containerd[1977]: time="2026-03-07T01:10:49.989000597Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:10:49.989149 containerd[1977]: time="2026-03-07T01:10:49.989017816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:10:49.989340 containerd[1977]: time="2026-03-07T01:10:49.987485485Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:10:49.989340 containerd[1977]: time="2026-03-07T01:10:49.988606122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:10:49.989340 containerd[1977]: time="2026-03-07T01:10:49.988774998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:10:49.990138 containerd[1977]: time="2026-03-07T01:10:49.989963393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:10:49.993899 containerd[1977]: time="2026-03-07T01:10:49.993749636Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:10:49.994038 containerd[1977]: time="2026-03-07T01:10:49.993860299Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:10:49.994038 containerd[1977]: time="2026-03-07T01:10:49.994011966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:10:49.994346 containerd[1977]: time="2026-03-07T01:10:49.994237904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:10:50.018179 kubelet[2807]: E0307 01:10:50.018126 2807 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.24.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.24.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 01:10:50.032510 kubelet[2807]: E0307 01:10:50.032356 2807 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.24.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.24.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:10:50.034064 systemd[1]: Started cri-containerd-49b8ddfb66c9438e5ea913138094308015c3a4469f48f7c45e50c0ae6f42f524.scope - libcontainer container 49b8ddfb66c9438e5ea913138094308015c3a4469f48f7c45e50c0ae6f42f524. Mar 7 01:10:50.040143 systemd[1]: Started cri-containerd-23a2ca0b5b8ae9a5d1334f01665a11d1e3b87b762db890da74a90f35a6135660.scope - libcontainer container 23a2ca0b5b8ae9a5d1334f01665a11d1e3b87b762db890da74a90f35a6135660. Mar 7 01:10:50.042307 systemd[1]: Started cri-containerd-8b0f40a10d1a2be63a8bd24480c1f03ce0b205b3bbd81e7e3d184261b7d6cd4f.scope - libcontainer container 8b0f40a10d1a2be63a8bd24480c1f03ce0b205b3bbd81e7e3d184261b7d6cd4f. 
Mar 7 01:10:50.123417 containerd[1977]: time="2026-03-07T01:10:50.123240451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-34,Uid:5dbcb46d561f0cf9b2d7a695570efb7a,Namespace:kube-system,Attempt:0,} returns sandbox id \"23a2ca0b5b8ae9a5d1334f01665a11d1e3b87b762db890da74a90f35a6135660\"" Mar 7 01:10:50.130440 containerd[1977]: time="2026-03-07T01:10:50.130400128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-34,Uid:4823808f481d520d9108c54bc3303e4e,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b0f40a10d1a2be63a8bd24480c1f03ce0b205b3bbd81e7e3d184261b7d6cd4f\"" Mar 7 01:10:50.137758 containerd[1977]: time="2026-03-07T01:10:50.137654822Z" level=info msg="CreateContainer within sandbox \"8b0f40a10d1a2be63a8bd24480c1f03ce0b205b3bbd81e7e3d184261b7d6cd4f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 7 01:10:50.139962 containerd[1977]: time="2026-03-07T01:10:50.139630217Z" level=info msg="CreateContainer within sandbox \"23a2ca0b5b8ae9a5d1334f01665a11d1e3b87b762db890da74a90f35a6135660\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 7 01:10:50.157175 containerd[1977]: time="2026-03-07T01:10:50.157121405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-34,Uid:03e3800edee052738755fa2d6169a4e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"49b8ddfb66c9438e5ea913138094308015c3a4469f48f7c45e50c0ae6f42f524\"" Mar 7 01:10:50.162504 containerd[1977]: time="2026-03-07T01:10:50.162474575Z" level=info msg="CreateContainer within sandbox \"49b8ddfb66c9438e5ea913138094308015c3a4469f48f7c45e50c0ae6f42f524\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 7 01:10:50.166480 kubelet[2807]: E0307 01:10:50.166442 2807 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.24.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-34?timeout=10s\": dial tcp 172.31.24.34:6443: connect: connection refused" interval="1.6s" Mar 7 01:10:50.201673 containerd[1977]: time="2026-03-07T01:10:50.201354201Z" level=info msg="CreateContainer within sandbox \"49b8ddfb66c9438e5ea913138094308015c3a4469f48f7c45e50c0ae6f42f524\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0241493e38c9f08c1a268fd62d1d4b0f87429bacc56bd6465c3cc56d90fcc2b3\"" Mar 7 01:10:50.203080 containerd[1977]: time="2026-03-07T01:10:50.203039058Z" level=info msg="CreateContainer within sandbox \"8b0f40a10d1a2be63a8bd24480c1f03ce0b205b3bbd81e7e3d184261b7d6cd4f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"88494849585f45d5fa4c2b2bd71cb0b6aa71daedf19f803caddafcd40a995eac\"" Mar 7 01:10:50.203454 containerd[1977]: time="2026-03-07T01:10:50.203322936Z" level=info msg="StartContainer for \"0241493e38c9f08c1a268fd62d1d4b0f87429bacc56bd6465c3cc56d90fcc2b3\"" Mar 7 01:10:50.205893 containerd[1977]: time="2026-03-07T01:10:50.204572362Z" level=info msg="CreateContainer within sandbox \"23a2ca0b5b8ae9a5d1334f01665a11d1e3b87b762db890da74a90f35a6135660\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0c4357b58475827b3f2f1bed08fbaf0737c2f2c7e4716882b79cb3af03275b71\"" Mar 7 01:10:50.205893 containerd[1977]: time="2026-03-07T01:10:50.204738272Z" level=info msg="StartContainer for \"88494849585f45d5fa4c2b2bd71cb0b6aa71daedf19f803caddafcd40a995eac\"" Mar 7 01:10:50.217073 containerd[1977]: time="2026-03-07T01:10:50.217034954Z" level=info msg="StartContainer for \"0c4357b58475827b3f2f1bed08fbaf0737c2f2c7e4716882b79cb3af03275b71\"" Mar 7 01:10:50.247112 systemd[1]: Started cri-containerd-0241493e38c9f08c1a268fd62d1d4b0f87429bacc56bd6465c3cc56d90fcc2b3.scope - libcontainer container 0241493e38c9f08c1a268fd62d1d4b0f87429bacc56bd6465c3cc56d90fcc2b3. 
Mar 7 01:10:50.260153 systemd[1]: Started cri-containerd-88494849585f45d5fa4c2b2bd71cb0b6aa71daedf19f803caddafcd40a995eac.scope - libcontainer container 88494849585f45d5fa4c2b2bd71cb0b6aa71daedf19f803caddafcd40a995eac. Mar 7 01:10:50.280652 kubelet[2807]: E0307 01:10:50.280611 2807 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.24.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-34&limit=500&resourceVersion=0\": dial tcp 172.31.24.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:10:50.299577 systemd[1]: Started cri-containerd-0c4357b58475827b3f2f1bed08fbaf0737c2f2c7e4716882b79cb3af03275b71.scope - libcontainer container 0c4357b58475827b3f2f1bed08fbaf0737c2f2c7e4716882b79cb3af03275b71. Mar 7 01:10:50.344311 containerd[1977]: time="2026-03-07T01:10:50.344273773Z" level=info msg="StartContainer for \"0241493e38c9f08c1a268fd62d1d4b0f87429bacc56bd6465c3cc56d90fcc2b3\" returns successfully" Mar 7 01:10:50.366406 containerd[1977]: time="2026-03-07T01:10:50.366348501Z" level=info msg="StartContainer for \"88494849585f45d5fa4c2b2bd71cb0b6aa71daedf19f803caddafcd40a995eac\" returns successfully" Mar 7 01:10:50.375163 kubelet[2807]: I0307 01:10:50.375122 2807 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-34" Mar 7 01:10:50.375678 kubelet[2807]: E0307 01:10:50.375504 2807 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.34:6443/api/v1/nodes\": dial tcp 172.31.24.34:6443: connect: connection refused" node="ip-172-31-24-34" Mar 7 01:10:50.401254 containerd[1977]: time="2026-03-07T01:10:50.401206902Z" level=info msg="StartContainer for \"0c4357b58475827b3f2f1bed08fbaf0737c2f2c7e4716882b79cb3af03275b71\" returns successfully" Mar 7 01:10:50.825676 kubelet[2807]: E0307 01:10:50.825193 2807 kubelet.go:3305] "No need to create a mirror pod, since failed to get 
node info from the cluster" err="node \"ip-172-31-24-34\" not found" node="ip-172-31-24-34" Mar 7 01:10:50.827715 kubelet[2807]: E0307 01:10:50.827670 2807 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-34\" not found" node="ip-172-31-24-34" Mar 7 01:10:50.830911 kubelet[2807]: E0307 01:10:50.828803 2807 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-34\" not found" node="ip-172-31-24-34" Mar 7 01:10:50.863746 kubelet[2807]: E0307 01:10:50.863691 2807 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.24.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.24.34:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 01:10:51.671249 kubelet[2807]: E0307 01:10:51.671202 2807 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.24.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.24.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 01:10:51.767160 kubelet[2807]: E0307 01:10:51.767105 2807 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-34?timeout=10s\": dial tcp 172.31.24.34:6443: connect: connection refused" interval="3.2s" Mar 7 01:10:51.831901 kubelet[2807]: E0307 01:10:51.831094 2807 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-34\" not found" node="ip-172-31-24-34" Mar 7 01:10:51.831901 kubelet[2807]: E0307 01:10:51.831542 2807 
kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-34\" not found" node="ip-172-31-24-34" Mar 7 01:10:51.835620 kubelet[2807]: E0307 01:10:51.835593 2807 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.24.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.24.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 01:10:51.978089 kubelet[2807]: I0307 01:10:51.977522 2807 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-34" Mar 7 01:10:51.978776 kubelet[2807]: E0307 01:10:51.978739 2807 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.34:6443/api/v1/nodes\": dial tcp 172.31.24.34:6443: connect: connection refused" node="ip-172-31-24-34" Mar 7 01:10:52.564927 kubelet[2807]: E0307 01:10:52.564856 2807 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.24.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.24.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:10:53.249535 kubelet[2807]: E0307 01:10:53.249493 2807 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.24.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-34&limit=500&resourceVersion=0\": dial tcp 172.31.24.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:10:54.108972 kubelet[2807]: E0307 01:10:54.108943 2807 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-34\" not found" node="ip-172-31-24-34" Mar 
7 01:10:55.180386 kubelet[2807]: I0307 01:10:55.180354 2807 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-34" Mar 7 01:10:56.269122 kubelet[2807]: E0307 01:10:56.269070 2807 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-24-34\" not found" node="ip-172-31-24-34" Mar 7 01:10:56.408890 kubelet[2807]: I0307 01:10:56.407598 2807 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-24-34" Mar 7 01:10:56.408890 kubelet[2807]: E0307 01:10:56.407646 2807 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-24-34\": node \"ip-172-31-24-34\" not found" Mar 7 01:10:56.431321 kubelet[2807]: E0307 01:10:56.431279 2807 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-34\" not found" Mar 7 01:10:56.497241 kubelet[2807]: E0307 01:10:56.497213 2807 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-34\" not found" node="ip-172-31-24-34" Mar 7 01:10:56.531734 kubelet[2807]: E0307 01:10:56.531598 2807 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-34\" not found" Mar 7 01:10:56.632718 kubelet[2807]: E0307 01:10:56.632662 2807 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-34\" not found" Mar 7 01:10:56.733548 kubelet[2807]: E0307 01:10:56.733492 2807 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-34\" not found" Mar 7 01:10:56.834148 kubelet[2807]: E0307 01:10:56.834002 2807 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-34\" not found" Mar 7 01:10:56.934510 kubelet[2807]: E0307 01:10:56.934456 2807 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-34\" not 
found" Mar 7 01:10:57.035199 kubelet[2807]: E0307 01:10:57.035138 2807 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-34\" not found" Mar 7 01:10:57.136075 kubelet[2807]: E0307 01:10:57.136031 2807 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-34\" not found" Mar 7 01:10:57.237247 kubelet[2807]: E0307 01:10:57.237198 2807 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-34\" not found" Mar 7 01:10:57.338351 kubelet[2807]: E0307 01:10:57.338309 2807 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-34\" not found" Mar 7 01:10:57.439183 kubelet[2807]: E0307 01:10:57.439058 2807 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-34\" not found" Mar 7 01:10:57.539825 kubelet[2807]: E0307 01:10:57.539784 2807 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-34\" not found" Mar 7 01:10:57.640435 kubelet[2807]: E0307 01:10:57.640374 2807 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-34\" not found" Mar 7 01:10:57.740623 kubelet[2807]: E0307 01:10:57.740498 2807 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-34\" not found" Mar 7 01:10:57.841235 kubelet[2807]: E0307 01:10:57.841196 2807 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-34\" not found" Mar 7 01:10:57.942385 kubelet[2807]: E0307 01:10:57.942334 2807 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-34\" not found" Mar 7 01:10:58.043314 kubelet[2807]: E0307 01:10:58.043184 2807 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-34\" not found" Mar 7 01:10:58.144092 
kubelet[2807]: E0307 01:10:58.144039 2807 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-34\" not found" Mar 7 01:10:58.244464 kubelet[2807]: E0307 01:10:58.244412 2807 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-34\" not found" Mar 7 01:10:58.345199 kubelet[2807]: E0307 01:10:58.345093 2807 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-34\" not found" Mar 7 01:10:58.445709 kubelet[2807]: E0307 01:10:58.445660 2807 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-34\" not found" Mar 7 01:10:58.546396 kubelet[2807]: E0307 01:10:58.546348 2807 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-34\" not found" Mar 7 01:10:58.647388 kubelet[2807]: E0307 01:10:58.647346 2807 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-34\" not found" Mar 7 01:10:58.679767 systemd[1]: Reloading requested from client PID 3095 ('systemctl') (unit session-7.scope)... Mar 7 01:10:58.679789 systemd[1]: Reloading... Mar 7 01:10:58.747862 kubelet[2807]: E0307 01:10:58.747816 2807 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-34\" not found" Mar 7 01:10:58.776897 zram_generator::config[3135]: No configuration found. 
Mar 7 01:10:58.848531 kubelet[2807]: E0307 01:10:58.848490 2807 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-34\" not found" Mar 7 01:10:58.867012 kubelet[2807]: E0307 01:10:58.866705 2807 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-24-34\" not found" Mar 7 01:10:58.949270 kubelet[2807]: E0307 01:10:58.949152 2807 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-34\" not found" Mar 7 01:10:58.960395 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:10:59.050414 kubelet[2807]: E0307 01:10:59.050346 2807 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-34\" not found" Mar 7 01:10:59.076374 systemd[1]: Reloading finished in 396 ms. Mar 7 01:10:59.120373 kubelet[2807]: I0307 01:10:59.120289 2807 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 01:10:59.120842 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:10:59.139555 systemd[1]: kubelet.service: Deactivated successfully. Mar 7 01:10:59.139811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:10:59.146233 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:10:59.425627 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 7 01:10:59.439368 (kubelet)[3195]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 01:10:59.513612 kubelet[3195]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 01:10:59.513612 kubelet[3195]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 7 01:10:59.513612 kubelet[3195]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 01:10:59.514084 kubelet[3195]: I0307 01:10:59.513709 3195 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 7 01:10:59.526113 kubelet[3195]: I0307 01:10:59.526074 3195 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 7 01:10:59.526113 kubelet[3195]: I0307 01:10:59.526099 3195 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 01:10:59.526402 kubelet[3195]: I0307 01:10:59.526377 3195 server.go:956] "Client rotation is on, will bootstrap in background" Mar 7 01:10:59.528923 kubelet[3195]: I0307 01:10:59.528088 3195 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 7 01:10:59.535141 kubelet[3195]: I0307 01:10:59.535095 3195 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 01:10:59.538975 kubelet[3195]: E0307 01:10:59.538930 3195 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = 
Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 7 01:10:59.538975 kubelet[3195]: I0307 01:10:59.538959 3195 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 7 01:10:59.547885 kubelet[3195]: I0307 01:10:59.547845 3195 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 7 01:10:59.549546 kubelet[3195]: I0307 01:10:59.549099 3195 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 7 01:10:59.549546 kubelet[3195]: I0307 01:10:59.549297 3195 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-34","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 7 01:10:59.549546 kubelet[3195]: I0307 01:10:59.549506 3195 topology_manager.go:138] "Creating topology manager with none policy"
Mar 7 01:10:59.549546 kubelet[3195]: I0307 01:10:59.549521 3195 container_manager_linux.go:303] "Creating device plugin manager"
Mar 7 01:10:59.549929 kubelet[3195]: I0307 01:10:59.549578 3195 state_mem.go:36] "Initialized new in-memory state store"
Mar 7 01:10:59.549929 kubelet[3195]: I0307 01:10:59.549770 3195 kubelet.go:480] "Attempting to sync node with API server"
Mar 7 01:10:59.549929 kubelet[3195]: I0307 01:10:59.549787 3195 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 7 01:10:59.549929 kubelet[3195]: I0307 01:10:59.549819 3195 kubelet.go:386] "Adding apiserver pod source"
Mar 7 01:10:59.549929 kubelet[3195]: I0307 01:10:59.549842 3195 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 7 01:10:59.555896 kubelet[3195]: I0307 01:10:59.554998 3195 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 7 01:10:59.555896 kubelet[3195]: I0307 01:10:59.555686 3195 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 7 01:10:59.559857 kubelet[3195]: I0307 01:10:59.559206 3195 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 7 01:10:59.559857 kubelet[3195]: I0307 01:10:59.559250 3195 server.go:1289] "Started kubelet"
Mar 7 01:10:59.563221 kubelet[3195]: I0307 01:10:59.563195 3195 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 7 01:10:59.572897 kubelet[3195]: I0307 01:10:59.571688 3195 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 7 01:10:59.573894 kubelet[3195]: I0307 01:10:59.573070 3195 server.go:317] "Adding debug handlers to kubelet server"
Mar 7 01:10:59.579292 kubelet[3195]: I0307 01:10:59.579204 3195 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 7 01:10:59.579554 kubelet[3195]: E0307 01:10:59.579529 3195 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-34\" not found"
Mar 7 01:10:59.579822 kubelet[3195]: I0307 01:10:59.579805 3195 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 7 01:10:59.580635 kubelet[3195]: I0307 01:10:59.580553 3195 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 7 01:10:59.580823 kubelet[3195]: I0307 01:10:59.580805 3195 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 7 01:10:59.581106 kubelet[3195]: I0307 01:10:59.581086 3195 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 7 01:10:59.582034 kubelet[3195]: I0307 01:10:59.582014 3195 reconciler.go:26] "Reconciler: start to sync state"
Mar 7 01:10:59.591385 kubelet[3195]: I0307 01:10:59.591057 3195 factory.go:223] Registration of the systemd container factory successfully
Mar 7 01:10:59.591385 kubelet[3195]: I0307 01:10:59.591180 3195 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 7 01:10:59.592831 kubelet[3195]: I0307 01:10:59.592796 3195 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 7 01:10:59.594930 kubelet[3195]: I0307 01:10:59.594121 3195 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 7 01:10:59.594930 kubelet[3195]: I0307 01:10:59.594141 3195 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 7 01:10:59.594930 kubelet[3195]: I0307 01:10:59.594164 3195 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 7 01:10:59.594930 kubelet[3195]: I0307 01:10:59.594173 3195 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 7 01:10:59.594930 kubelet[3195]: E0307 01:10:59.594219 3195 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 7 01:10:59.612931 kubelet[3195]: I0307 01:10:59.612665 3195 factory.go:223] Registration of the containerd container factory successfully
Mar 7 01:10:59.614232 kubelet[3195]: E0307 01:10:59.613179 3195 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 7 01:10:59.677946 kubelet[3195]: I0307 01:10:59.677828 3195 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 7 01:10:59.678392 kubelet[3195]: I0307 01:10:59.678099 3195 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 7 01:10:59.678392 kubelet[3195]: I0307 01:10:59.678126 3195 state_mem.go:36] "Initialized new in-memory state store"
Mar 7 01:10:59.678392 kubelet[3195]: I0307 01:10:59.678321 3195 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 7 01:10:59.678392 kubelet[3195]: I0307 01:10:59.678332 3195 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 7 01:10:59.678392 kubelet[3195]: I0307 01:10:59.678353 3195 policy_none.go:49] "None policy: Start"
Mar 7 01:10:59.678392 kubelet[3195]: I0307 01:10:59.678366 3195 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 7 01:10:59.679083 kubelet[3195]: I0307 01:10:59.678378 3195 state_mem.go:35] "Initializing new in-memory state store"
Mar 7 01:10:59.679083 kubelet[3195]: I0307 01:10:59.678993 3195 state_mem.go:75] "Updated machine memory state"
Mar 7 01:10:59.684919 kubelet[3195]: E0307 01:10:59.684721 3195 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 7 01:10:59.687126 kubelet[3195]: I0307 01:10:59.687016 3195 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 7 01:10:59.687126 kubelet[3195]: I0307 01:10:59.687055 3195 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 7 01:10:59.688018 kubelet[3195]: I0307 01:10:59.687808 3195 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 7 01:10:59.692518 kubelet[3195]: E0307 01:10:59.691996 3195 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 7 01:10:59.696157 kubelet[3195]: I0307 01:10:59.696115 3195 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-34"
Mar 7 01:10:59.699904 kubelet[3195]: I0307 01:10:59.699862 3195 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-34"
Mar 7 01:10:59.702259 kubelet[3195]: I0307 01:10:59.702240 3195 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-34"
Mar 7 01:10:59.729435 sudo[3230]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Mar 7 01:10:59.730024 sudo[3230]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Mar 7 01:10:59.782339 kubelet[3195]: I0307 01:10:59.782272 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5dbcb46d561f0cf9b2d7a695570efb7a-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-34\" (UID: \"5dbcb46d561f0cf9b2d7a695570efb7a\") " pod="kube-system/kube-controller-manager-ip-172-31-24-34"
Mar 7 01:10:59.782502 kubelet[3195]: I0307 01:10:59.782352 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5dbcb46d561f0cf9b2d7a695570efb7a-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-34\" (UID: \"5dbcb46d561f0cf9b2d7a695570efb7a\") " pod="kube-system/kube-controller-manager-ip-172-31-24-34"
Mar 7 01:10:59.782502 kubelet[3195]: I0307 01:10:59.782391 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/03e3800edee052738755fa2d6169a4e8-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-34\" (UID: \"03e3800edee052738755fa2d6169a4e8\") " pod="kube-system/kube-scheduler-ip-172-31-24-34"
Mar 7 01:10:59.782502 kubelet[3195]: I0307 01:10:59.782426 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4823808f481d520d9108c54bc3303e4e-ca-certs\") pod \"kube-apiserver-ip-172-31-24-34\" (UID: \"4823808f481d520d9108c54bc3303e4e\") " pod="kube-system/kube-apiserver-ip-172-31-24-34"
Mar 7 01:10:59.782502 kubelet[3195]: I0307 01:10:59.782456 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4823808f481d520d9108c54bc3303e4e-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-34\" (UID: \"4823808f481d520d9108c54bc3303e4e\") " pod="kube-system/kube-apiserver-ip-172-31-24-34"
Mar 7 01:10:59.782502 kubelet[3195]: I0307 01:10:59.782481 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5dbcb46d561f0cf9b2d7a695570efb7a-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-34\" (UID: \"5dbcb46d561f0cf9b2d7a695570efb7a\") " pod="kube-system/kube-controller-manager-ip-172-31-24-34"
Mar 7 01:10:59.782851 kubelet[3195]: I0307 01:10:59.782504 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dbcb46d561f0cf9b2d7a695570efb7a-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-34\" (UID: \"5dbcb46d561f0cf9b2d7a695570efb7a\") " pod="kube-system/kube-controller-manager-ip-172-31-24-34"
Mar 7 01:10:59.782851 kubelet[3195]: I0307 01:10:59.782649 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5dbcb46d561f0cf9b2d7a695570efb7a-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-34\" (UID: \"5dbcb46d561f0cf9b2d7a695570efb7a\") " pod="kube-system/kube-controller-manager-ip-172-31-24-34"
Mar 7 01:10:59.782851 kubelet[3195]: I0307 01:10:59.782685 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4823808f481d520d9108c54bc3303e4e-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-34\" (UID: \"4823808f481d520d9108c54bc3303e4e\") " pod="kube-system/kube-apiserver-ip-172-31-24-34"
Mar 7 01:10:59.806248 kubelet[3195]: I0307 01:10:59.806220 3195 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-34"
Mar 7 01:10:59.819773 kubelet[3195]: I0307 01:10:59.819328 3195 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-24-34"
Mar 7 01:10:59.819773 kubelet[3195]: I0307 01:10:59.819417 3195 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-24-34"
Mar 7 01:11:00.557215 kubelet[3195]: I0307 01:11:00.556559 3195 apiserver.go:52] "Watching apiserver"
Mar 7 01:11:00.580008 kubelet[3195]: I0307 01:11:00.579962 3195 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Mar 7 01:11:00.671123 sudo[3230]: pam_unix(sudo:session): session closed for user root
Mar 7 01:11:00.703419 kubelet[3195]: I0307 01:11:00.703281 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-24-34" podStartSLOduration=1.703260182 podStartE2EDuration="1.703260182s" podCreationTimestamp="2026-03-07 01:10:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:11:00.700614132 +0000 UTC m=+1.255085506" watchObservedRunningTime="2026-03-07 01:11:00.703260182 +0000 UTC m=+1.257731549"
Mar 7 01:11:00.704159 kubelet[3195]: I0307 01:11:00.703659 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-24-34" podStartSLOduration=1.70364651 podStartE2EDuration="1.70364651s" podCreationTimestamp="2026-03-07 01:10:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:11:00.690277579 +0000 UTC m=+1.244748953" watchObservedRunningTime="2026-03-07 01:11:00.70364651 +0000 UTC m=+1.258117881"
Mar 7 01:11:00.735732 kubelet[3195]: I0307 01:11:00.735634 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-24-34" podStartSLOduration=1.735597822 podStartE2EDuration="1.735597822s" podCreationTimestamp="2026-03-07 01:10:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:11:00.716079575 +0000 UTC m=+1.270550948" watchObservedRunningTime="2026-03-07 01:11:00.735597822 +0000 UTC m=+1.290069196"
Mar 7 01:11:02.968533 sudo[2293]: pam_unix(sudo:session): session closed for user root
Mar 7 01:11:03.045943 sshd[2290]: pam_unix(sshd:session): session closed for user core
Mar 7 01:11:03.050843 systemd-logind[1955]: Session 7 logged out. Waiting for processes to exit.
Mar 7 01:11:03.051854 systemd[1]: sshd@6-172.31.24.34:22-68.220.241.50:57440.service: Deactivated successfully.
Mar 7 01:11:03.054014 systemd[1]: session-7.scope: Deactivated successfully.
Mar 7 01:11:03.054832 systemd[1]: session-7.scope: Consumed 6.174s CPU time, 144.7M memory peak, 0B memory swap peak.
Mar 7 01:11:03.056326 systemd-logind[1955]: Removed session 7.
Mar 7 01:11:03.398237 update_engine[1956]: I20260307 01:11:03.398164 1956 update_attempter.cc:509] Updating boot flags...
Mar 7 01:11:03.519189 kubelet[3195]: I0307 01:11:03.519157 3195 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 7 01:11:03.520023 containerd[1977]: time="2026-03-07T01:11:03.519985846Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 7 01:11:03.520398 kubelet[3195]: I0307 01:11:03.520197 3195 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 7 01:11:03.635373 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3281)
Mar 7 01:11:03.687115 systemd[1]: Created slice kubepods-besteffort-pod9d4ba02c_5881_4bb4_bc62_ad9a6f0c89f4.slice - libcontainer container kubepods-besteffort-pod9d4ba02c_5881_4bb4_bc62_ad9a6f0c89f4.slice.
Mar 7 01:11:03.705357 systemd[1]: Created slice kubepods-burstable-pod2a28c2e3_1262_4c06_995d_2412396ce53a.slice - libcontainer container kubepods-burstable-pod2a28c2e3_1262_4c06_995d_2412396ce53a.slice.
Mar 7 01:11:03.710214 kubelet[3195]: I0307 01:11:03.710182 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqlbv\" (UniqueName: \"kubernetes.io/projected/9d4ba02c-5881-4bb4-bc62-ad9a6f0c89f4-kube-api-access-zqlbv\") pod \"kube-proxy-bj68p\" (UID: \"9d4ba02c-5881-4bb4-bc62-ad9a6f0c89f4\") " pod="kube-system/kube-proxy-bj68p"
Mar 7 01:11:03.711607 kubelet[3195]: I0307 01:11:03.711292 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2a28c2e3-1262-4c06-995d-2412396ce53a-cilium-cgroup\") pod \"cilium-7hs4m\" (UID: \"2a28c2e3-1262-4c06-995d-2412396ce53a\") " pod="kube-system/cilium-7hs4m"
Mar 7 01:11:03.711607 kubelet[3195]: I0307 01:11:03.711334 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2a28c2e3-1262-4c06-995d-2412396ce53a-cni-path\") pod \"cilium-7hs4m\" (UID: \"2a28c2e3-1262-4c06-995d-2412396ce53a\") " pod="kube-system/cilium-7hs4m"
Mar 7 01:11:03.711607 kubelet[3195]: I0307 01:11:03.711467 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2a28c2e3-1262-4c06-995d-2412396ce53a-host-proc-sys-kernel\") pod \"cilium-7hs4m\" (UID: \"2a28c2e3-1262-4c06-995d-2412396ce53a\") " pod="kube-system/cilium-7hs4m"
Mar 7 01:11:03.712679 kubelet[3195]: I0307 01:11:03.711863 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d4ba02c-5881-4bb4-bc62-ad9a6f0c89f4-lib-modules\") pod \"kube-proxy-bj68p\" (UID: \"9d4ba02c-5881-4bb4-bc62-ad9a6f0c89f4\") " pod="kube-system/kube-proxy-bj68p"
Mar 7 01:11:03.712679 kubelet[3195]: I0307 01:11:03.712003 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2a28c2e3-1262-4c06-995d-2412396ce53a-bpf-maps\") pod \"cilium-7hs4m\" (UID: \"2a28c2e3-1262-4c06-995d-2412396ce53a\") " pod="kube-system/cilium-7hs4m"
Mar 7 01:11:03.712679 kubelet[3195]: I0307 01:11:03.712470 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2a28c2e3-1262-4c06-995d-2412396ce53a-hubble-tls\") pod \"cilium-7hs4m\" (UID: \"2a28c2e3-1262-4c06-995d-2412396ce53a\") " pod="kube-system/cilium-7hs4m"
Mar 7 01:11:03.713212 kubelet[3195]: I0307 01:11:03.713005 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzbv9\" (UniqueName: \"kubernetes.io/projected/2a28c2e3-1262-4c06-995d-2412396ce53a-kube-api-access-fzbv9\") pod \"cilium-7hs4m\" (UID: \"2a28c2e3-1262-4c06-995d-2412396ce53a\") " pod="kube-system/cilium-7hs4m"
Mar 7 01:11:03.713927 kubelet[3195]: I0307 01:11:03.713419 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9d4ba02c-5881-4bb4-bc62-ad9a6f0c89f4-kube-proxy\") pod \"kube-proxy-bj68p\" (UID: \"9d4ba02c-5881-4bb4-bc62-ad9a6f0c89f4\") " pod="kube-system/kube-proxy-bj68p"
Mar 7 01:11:03.714130 kubelet[3195]: I0307 01:11:03.714087 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d4ba02c-5881-4bb4-bc62-ad9a6f0c89f4-xtables-lock\") pod \"kube-proxy-bj68p\" (UID: \"9d4ba02c-5881-4bb4-bc62-ad9a6f0c89f4\") " pod="kube-system/kube-proxy-bj68p"
Mar 7 01:11:03.714255 kubelet[3195]: I0307 01:11:03.714239 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2a28c2e3-1262-4c06-995d-2412396ce53a-cilium-run\") pod \"cilium-7hs4m\" (UID: \"2a28c2e3-1262-4c06-995d-2412396ce53a\") " pod="kube-system/cilium-7hs4m"
Mar 7 01:11:03.714413 kubelet[3195]: I0307 01:11:03.714316 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2a28c2e3-1262-4c06-995d-2412396ce53a-hostproc\") pod \"cilium-7hs4m\" (UID: \"2a28c2e3-1262-4c06-995d-2412396ce53a\") " pod="kube-system/cilium-7hs4m"
Mar 7 01:11:03.714413 kubelet[3195]: I0307 01:11:03.714342 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2a28c2e3-1262-4c06-995d-2412396ce53a-lib-modules\") pod \"cilium-7hs4m\" (UID: \"2a28c2e3-1262-4c06-995d-2412396ce53a\") " pod="kube-system/cilium-7hs4m"
Mar 7 01:11:03.714824 kubelet[3195]: I0307 01:11:03.714480 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2a28c2e3-1262-4c06-995d-2412396ce53a-cilium-config-path\") pod \"cilium-7hs4m\" (UID: \"2a28c2e3-1262-4c06-995d-2412396ce53a\") " pod="kube-system/cilium-7hs4m"
Mar 7 01:11:03.714824 kubelet[3195]: I0307 01:11:03.714634 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2a28c2e3-1262-4c06-995d-2412396ce53a-etc-cni-netd\") pod \"cilium-7hs4m\" (UID: \"2a28c2e3-1262-4c06-995d-2412396ce53a\") " pod="kube-system/cilium-7hs4m"
Mar 7 01:11:03.714824 kubelet[3195]: I0307 01:11:03.714671 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2a28c2e3-1262-4c06-995d-2412396ce53a-xtables-lock\") pod \"cilium-7hs4m\" (UID: \"2a28c2e3-1262-4c06-995d-2412396ce53a\") " pod="kube-system/cilium-7hs4m"
Mar 7 01:11:03.715786 kubelet[3195]: I0307 01:11:03.714908 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2a28c2e3-1262-4c06-995d-2412396ce53a-clustermesh-secrets\") pod \"cilium-7hs4m\" (UID: \"2a28c2e3-1262-4c06-995d-2412396ce53a\") " pod="kube-system/cilium-7hs4m"
Mar 7 01:11:03.715786 kubelet[3195]: I0307 01:11:03.715103 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2a28c2e3-1262-4c06-995d-2412396ce53a-host-proc-sys-net\") pod \"cilium-7hs4m\" (UID: \"2a28c2e3-1262-4c06-995d-2412396ce53a\") " pod="kube-system/cilium-7hs4m"
Mar 7 01:11:03.926971 kubelet[3195]: E0307 01:11:03.925261 3195 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Mar 7 01:11:03.926971 kubelet[3195]: E0307 01:11:03.925500 3195 projected.go:194] Error preparing data for projected volume kube-api-access-zqlbv for pod kube-system/kube-proxy-bj68p: configmap "kube-root-ca.crt" not found
Mar 7 01:11:03.927442 kubelet[3195]: E0307 01:11:03.927218 3195 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d4ba02c-5881-4bb4-bc62-ad9a6f0c89f4-kube-api-access-zqlbv podName:9d4ba02c-5881-4bb4-bc62-ad9a6f0c89f4 nodeName:}" failed. No retries permitted until 2026-03-07 01:11:04.425674592 +0000 UTC m=+4.980145964 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zqlbv" (UniqueName: "kubernetes.io/projected/9d4ba02c-5881-4bb4-bc62-ad9a6f0c89f4-kube-api-access-zqlbv") pod "kube-proxy-bj68p" (UID: "9d4ba02c-5881-4bb4-bc62-ad9a6f0c89f4") : configmap "kube-root-ca.crt" not found
Mar 7 01:11:03.935522 kubelet[3195]: E0307 01:11:03.935460 3195 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Mar 7 01:11:03.935522 kubelet[3195]: E0307 01:11:03.935501 3195 projected.go:194] Error preparing data for projected volume kube-api-access-fzbv9 for pod kube-system/cilium-7hs4m: configmap "kube-root-ca.crt" not found
Mar 7 01:11:03.937114 kubelet[3195]: E0307 01:11:03.935588 3195 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2a28c2e3-1262-4c06-995d-2412396ce53a-kube-api-access-fzbv9 podName:2a28c2e3-1262-4c06-995d-2412396ce53a nodeName:}" failed. No retries permitted until 2026-03-07 01:11:04.43556481 +0000 UTC m=+4.990036177 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-fzbv9" (UniqueName: "kubernetes.io/projected/2a28c2e3-1262-4c06-995d-2412396ce53a-kube-api-access-fzbv9") pod "cilium-7hs4m" (UID: "2a28c2e3-1262-4c06-995d-2412396ce53a") : configmap "kube-root-ca.crt" not found
Mar 7 01:11:04.042077 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3281)
Mar 7 01:11:04.599023 containerd[1977]: time="2026-03-07T01:11:04.598974732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bj68p,Uid:9d4ba02c-5881-4bb4-bc62-ad9a6f0c89f4,Namespace:kube-system,Attempt:0,}"
Mar 7 01:11:04.617758 containerd[1977]: time="2026-03-07T01:11:04.617672493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7hs4m,Uid:2a28c2e3-1262-4c06-995d-2412396ce53a,Namespace:kube-system,Attempt:0,}"
Mar 7 01:11:04.642129 containerd[1977]: time="2026-03-07T01:11:04.641534064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:11:04.642129 containerd[1977]: time="2026-03-07T01:11:04.641609025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:11:04.642129 containerd[1977]: time="2026-03-07T01:11:04.641627973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:11:04.642129 containerd[1977]: time="2026-03-07T01:11:04.641739511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:11:04.692891 containerd[1977]: time="2026-03-07T01:11:04.689015147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:11:04.692891 containerd[1977]: time="2026-03-07T01:11:04.691689115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:11:04.692891 containerd[1977]: time="2026-03-07T01:11:04.691723624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:11:04.692891 containerd[1977]: time="2026-03-07T01:11:04.691831252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:11:04.693127 systemd[1]: Started cri-containerd-2040aa854a98084befc686bf5c4b009e1b3e42ad0af165d946d4efae2b4bc26f.scope - libcontainer container 2040aa854a98084befc686bf5c4b009e1b3e42ad0af165d946d4efae2b4bc26f.
Mar 7 01:11:04.731096 systemd[1]: Started cri-containerd-4686fa7115e02de717001fd487abd88c943ae6fc2c01a0fccc225f7cd2c87f88.scope - libcontainer container 4686fa7115e02de717001fd487abd88c943ae6fc2c01a0fccc225f7cd2c87f88.
Mar 7 01:11:04.767236 systemd[1]: Created slice kubepods-besteffort-pod64431922_4fe4_4f7b_9d0f_612c5d8b318c.slice - libcontainer container kubepods-besteffort-pod64431922_4fe4_4f7b_9d0f_612c5d8b318c.slice.
Mar 7 01:11:04.798696 containerd[1977]: time="2026-03-07T01:11:04.798199118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bj68p,Uid:9d4ba02c-5881-4bb4-bc62-ad9a6f0c89f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"2040aa854a98084befc686bf5c4b009e1b3e42ad0af165d946d4efae2b4bc26f\""
Mar 7 01:11:04.802148 containerd[1977]: time="2026-03-07T01:11:04.802026531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7hs4m,Uid:2a28c2e3-1262-4c06-995d-2412396ce53a,Namespace:kube-system,Attempt:0,} returns sandbox id \"4686fa7115e02de717001fd487abd88c943ae6fc2c01a0fccc225f7cd2c87f88\""
Mar 7 01:11:04.809160 containerd[1977]: time="2026-03-07T01:11:04.809123587Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Mar 7 01:11:04.811703 containerd[1977]: time="2026-03-07T01:11:04.811665002Z" level=info msg="CreateContainer within sandbox \"2040aa854a98084befc686bf5c4b009e1b3e42ad0af165d946d4efae2b4bc26f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 7 01:11:04.826188 kubelet[3195]: I0307 01:11:04.826098 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/64431922-4fe4-4f7b-9d0f-612c5d8b318c-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-bw9ln\" (UID: \"64431922-4fe4-4f7b-9d0f-612c5d8b318c\") " pod="kube-system/cilium-operator-6c4d7847fc-bw9ln"
Mar 7 01:11:04.826188 kubelet[3195]: I0307 01:11:04.826139 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxtvt\" (UniqueName: \"kubernetes.io/projected/64431922-4fe4-4f7b-9d0f-612c5d8b318c-kube-api-access-hxtvt\") pod \"cilium-operator-6c4d7847fc-bw9ln\" (UID: \"64431922-4fe4-4f7b-9d0f-612c5d8b318c\") " pod="kube-system/cilium-operator-6c4d7847fc-bw9ln"
Mar 7 01:11:04.863826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1656927955.mount: Deactivated successfully.
Mar 7 01:11:04.869023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3156047495.mount: Deactivated successfully.
Mar 7 01:11:04.869509 containerd[1977]: time="2026-03-07T01:11:04.869420968Z" level=info msg="CreateContainer within sandbox \"2040aa854a98084befc686bf5c4b009e1b3e42ad0af165d946d4efae2b4bc26f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a2e25b0c66eaf26c7fe9f57b8794654db428b20f99050215ef7a8490509f8c49\""
Mar 7 01:11:04.872149 containerd[1977]: time="2026-03-07T01:11:04.872097270Z" level=info msg="StartContainer for \"a2e25b0c66eaf26c7fe9f57b8794654db428b20f99050215ef7a8490509f8c49\""
Mar 7 01:11:04.912087 systemd[1]: Started cri-containerd-a2e25b0c66eaf26c7fe9f57b8794654db428b20f99050215ef7a8490509f8c49.scope - libcontainer container a2e25b0c66eaf26c7fe9f57b8794654db428b20f99050215ef7a8490509f8c49.
Mar 7 01:11:04.951415 containerd[1977]: time="2026-03-07T01:11:04.951362515Z" level=info msg="StartContainer for \"a2e25b0c66eaf26c7fe9f57b8794654db428b20f99050215ef7a8490509f8c49\" returns successfully"
Mar 7 01:11:05.075475 containerd[1977]: time="2026-03-07T01:11:05.075423195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bw9ln,Uid:64431922-4fe4-4f7b-9d0f-612c5d8b318c,Namespace:kube-system,Attempt:0,}"
Mar 7 01:11:05.102694 containerd[1977]: time="2026-03-07T01:11:05.102343316Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 01:11:05.102694 containerd[1977]: time="2026-03-07T01:11:05.102399655Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 01:11:05.102694 containerd[1977]: time="2026-03-07T01:11:05.102415789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:11:05.102973 containerd[1977]: time="2026-03-07T01:11:05.102632264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 01:11:05.125084 systemd[1]: Started cri-containerd-6287a5df5e53ee12718798fa80918097a81b7c3d9dbf4d3c6c54038d9e81dd2a.scope - libcontainer container 6287a5df5e53ee12718798fa80918097a81b7c3d9dbf4d3c6c54038d9e81dd2a.
Mar 7 01:11:05.176445 containerd[1977]: time="2026-03-07T01:11:05.176400349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bw9ln,Uid:64431922-4fe4-4f7b-9d0f-612c5d8b318c,Namespace:kube-system,Attempt:0,} returns sandbox id \"6287a5df5e53ee12718798fa80918097a81b7c3d9dbf4d3c6c54038d9e81dd2a\""
Mar 7 01:11:05.691959 kubelet[3195]: I0307 01:11:05.691894 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bj68p" podStartSLOduration=2.691855428 podStartE2EDuration="2.691855428s" podCreationTimestamp="2026-03-07 01:11:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:11:05.684977814 +0000 UTC m=+6.239449185" watchObservedRunningTime="2026-03-07 01:11:05.691855428 +0000 UTC m=+6.246326803"
Mar 7 01:11:05.851695 systemd[1]: run-containerd-runc-k8s.io-a2e25b0c66eaf26c7fe9f57b8794654db428b20f99050215ef7a8490509f8c49-runc.rKonbX.mount: Deactivated successfully.
Mar 7 01:11:13.001100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1325378138.mount: Deactivated successfully.
Mar 7 01:11:15.648539 containerd[1977]: time="2026-03-07T01:11:15.648482526Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:11:15.657771 containerd[1977]: time="2026-03-07T01:11:15.655261800Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 7 01:11:15.666847 containerd[1977]: time="2026-03-07T01:11:15.666799688Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:11:15.668567 containerd[1977]: time="2026-03-07T01:11:15.668136307Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.85876504s" Mar 7 01:11:15.668567 containerd[1977]: time="2026-03-07T01:11:15.668180067Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 7 01:11:15.669777 containerd[1977]: time="2026-03-07T01:11:15.669740806Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 7 01:11:15.678250 containerd[1977]: time="2026-03-07T01:11:15.678160812Z" level=info msg="CreateContainer within sandbox \"4686fa7115e02de717001fd487abd88c943ae6fc2c01a0fccc225f7cd2c87f88\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 7 01:11:15.787524 containerd[1977]: time="2026-03-07T01:11:15.787403079Z" level=info msg="CreateContainer within sandbox \"4686fa7115e02de717001fd487abd88c943ae6fc2c01a0fccc225f7cd2c87f88\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"576c2b4dfa9b2acb53639b3cb64168ddfa88e7d5565cc6087eb7863fa7bbb605\"" Mar 7 01:11:15.788361 containerd[1977]: time="2026-03-07T01:11:15.788142949Z" level=info msg="StartContainer for \"576c2b4dfa9b2acb53639b3cb64168ddfa88e7d5565cc6087eb7863fa7bbb605\"" Mar 7 01:11:15.970066 systemd[1]: Started cri-containerd-576c2b4dfa9b2acb53639b3cb64168ddfa88e7d5565cc6087eb7863fa7bbb605.scope - libcontainer container 576c2b4dfa9b2acb53639b3cb64168ddfa88e7d5565cc6087eb7863fa7bbb605. Mar 7 01:11:16.011436 containerd[1977]: time="2026-03-07T01:11:16.011024915Z" level=info msg="StartContainer for \"576c2b4dfa9b2acb53639b3cb64168ddfa88e7d5565cc6087eb7863fa7bbb605\" returns successfully" Mar 7 01:11:16.030520 systemd[1]: cri-containerd-576c2b4dfa9b2acb53639b3cb64168ddfa88e7d5565cc6087eb7863fa7bbb605.scope: Deactivated successfully. 
Mar 7 01:11:16.229781 containerd[1977]: time="2026-03-07T01:11:16.213518075Z" level=info msg="shim disconnected" id=576c2b4dfa9b2acb53639b3cb64168ddfa88e7d5565cc6087eb7863fa7bbb605 namespace=k8s.io Mar 7 01:11:16.229781 containerd[1977]: time="2026-03-07T01:11:16.229695628Z" level=warning msg="cleaning up after shim disconnected" id=576c2b4dfa9b2acb53639b3cb64168ddfa88e7d5565cc6087eb7863fa7bbb605 namespace=k8s.io Mar 7 01:11:16.229781 containerd[1977]: time="2026-03-07T01:11:16.229715543Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:11:16.740660 containerd[1977]: time="2026-03-07T01:11:16.740600532Z" level=info msg="CreateContainer within sandbox \"4686fa7115e02de717001fd487abd88c943ae6fc2c01a0fccc225f7cd2c87f88\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 7 01:11:16.753678 containerd[1977]: time="2026-03-07T01:11:16.753438339Z" level=info msg="CreateContainer within sandbox \"4686fa7115e02de717001fd487abd88c943ae6fc2c01a0fccc225f7cd2c87f88\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d6b9c8bf005515f344871f585879a0c323c488e9704f050877199775b7466c79\"" Mar 7 01:11:16.754697 containerd[1977]: time="2026-03-07T01:11:16.754510200Z" level=info msg="StartContainer for \"d6b9c8bf005515f344871f585879a0c323c488e9704f050877199775b7466c79\"" Mar 7 01:11:16.783491 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-576c2b4dfa9b2acb53639b3cb64168ddfa88e7d5565cc6087eb7863fa7bbb605-rootfs.mount: Deactivated successfully. Mar 7 01:11:16.804135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount980991193.mount: Deactivated successfully. Mar 7 01:11:16.821133 systemd[1]: Started cri-containerd-d6b9c8bf005515f344871f585879a0c323c488e9704f050877199775b7466c79.scope - libcontainer container d6b9c8bf005515f344871f585879a0c323c488e9704f050877199775b7466c79. 
Mar 7 01:11:16.848369 containerd[1977]: time="2026-03-07T01:11:16.848301317Z" level=info msg="StartContainer for \"d6b9c8bf005515f344871f585879a0c323c488e9704f050877199775b7466c79\" returns successfully" Mar 7 01:11:16.863170 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 7 01:11:16.863520 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:11:16.863608 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 7 01:11:16.873018 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 7 01:11:16.873451 systemd[1]: cri-containerd-d6b9c8bf005515f344871f585879a0c323c488e9704f050877199775b7466c79.scope: Deactivated successfully. Mar 7 01:11:16.933293 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6b9c8bf005515f344871f585879a0c323c488e9704f050877199775b7466c79-rootfs.mount: Deactivated successfully. Mar 7 01:11:16.951749 containerd[1977]: time="2026-03-07T01:11:16.951686614Z" level=info msg="shim disconnected" id=d6b9c8bf005515f344871f585879a0c323c488e9704f050877199775b7466c79 namespace=k8s.io Mar 7 01:11:16.951749 containerd[1977]: time="2026-03-07T01:11:16.951751322Z" level=warning msg="cleaning up after shim disconnected" id=d6b9c8bf005515f344871f585879a0c323c488e9704f050877199775b7466c79 namespace=k8s.io Mar 7 01:11:16.952046 containerd[1977]: time="2026-03-07T01:11:16.951763161Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:11:16.980939 containerd[1977]: time="2026-03-07T01:11:16.980856761Z" level=warning msg="cleanup warnings time=\"2026-03-07T01:11:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 7 01:11:16.989947 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Mar 7 01:11:17.714157 containerd[1977]: time="2026-03-07T01:11:17.714107331Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:11:17.716161 containerd[1977]: time="2026-03-07T01:11:17.715964209Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 7 01:11:17.718650 containerd[1977]: time="2026-03-07T01:11:17.718603120Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:11:17.720625 containerd[1977]: time="2026-03-07T01:11:17.720589918Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.05080836s" Mar 7 01:11:17.720727 containerd[1977]: time="2026-03-07T01:11:17.720628511Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 7 01:11:17.764804 containerd[1977]: time="2026-03-07T01:11:17.764686187Z" level=info msg="CreateContainer within sandbox \"6287a5df5e53ee12718798fa80918097a81b7c3d9dbf4d3c6c54038d9e81dd2a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 7 01:11:17.767786 containerd[1977]: time="2026-03-07T01:11:17.767751064Z" level=info msg="CreateContainer within sandbox 
\"4686fa7115e02de717001fd487abd88c943ae6fc2c01a0fccc225f7cd2c87f88\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 7 01:11:17.792023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1321523319.mount: Deactivated successfully. Mar 7 01:11:17.805716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount823180172.mount: Deactivated successfully. Mar 7 01:11:17.807179 containerd[1977]: time="2026-03-07T01:11:17.806660786Z" level=info msg="CreateContainer within sandbox \"6287a5df5e53ee12718798fa80918097a81b7c3d9dbf4d3c6c54038d9e81dd2a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"270dd1fa20c4f8400acd58af82b295246445a59f448bcac0edff072e06f1c116\"" Mar 7 01:11:17.808615 containerd[1977]: time="2026-03-07T01:11:17.808577507Z" level=info msg="StartContainer for \"270dd1fa20c4f8400acd58af82b295246445a59f448bcac0edff072e06f1c116\"" Mar 7 01:11:17.815820 containerd[1977]: time="2026-03-07T01:11:17.815420739Z" level=info msg="CreateContainer within sandbox \"4686fa7115e02de717001fd487abd88c943ae6fc2c01a0fccc225f7cd2c87f88\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a57310c59b85e31e3bed121b8bb1c2d8402f2e1ade6976f2219b37d0c611aca8\"" Mar 7 01:11:17.823938 containerd[1977]: time="2026-03-07T01:11:17.822825501Z" level=info msg="StartContainer for \"a57310c59b85e31e3bed121b8bb1c2d8402f2e1ade6976f2219b37d0c611aca8\"" Mar 7 01:11:17.854088 systemd[1]: Started cri-containerd-270dd1fa20c4f8400acd58af82b295246445a59f448bcac0edff072e06f1c116.scope - libcontainer container 270dd1fa20c4f8400acd58af82b295246445a59f448bcac0edff072e06f1c116. Mar 7 01:11:17.890246 systemd[1]: Started cri-containerd-a57310c59b85e31e3bed121b8bb1c2d8402f2e1ade6976f2219b37d0c611aca8.scope - libcontainer container a57310c59b85e31e3bed121b8bb1c2d8402f2e1ade6976f2219b37d0c611aca8. 
Mar 7 01:11:17.961328 containerd[1977]: time="2026-03-07T01:11:17.961281315Z" level=info msg="StartContainer for \"270dd1fa20c4f8400acd58af82b295246445a59f448bcac0edff072e06f1c116\" returns successfully" Mar 7 01:11:18.001707 systemd[1]: cri-containerd-a57310c59b85e31e3bed121b8bb1c2d8402f2e1ade6976f2219b37d0c611aca8.scope: Deactivated successfully. Mar 7 01:11:18.014315 containerd[1977]: time="2026-03-07T01:11:18.014206677Z" level=info msg="StartContainer for \"a57310c59b85e31e3bed121b8bb1c2d8402f2e1ade6976f2219b37d0c611aca8\" returns successfully" Mar 7 01:11:18.131115 containerd[1977]: time="2026-03-07T01:11:18.131041791Z" level=info msg="shim disconnected" id=a57310c59b85e31e3bed121b8bb1c2d8402f2e1ade6976f2219b37d0c611aca8 namespace=k8s.io Mar 7 01:11:18.131115 containerd[1977]: time="2026-03-07T01:11:18.131112687Z" level=warning msg="cleaning up after shim disconnected" id=a57310c59b85e31e3bed121b8bb1c2d8402f2e1ade6976f2219b37d0c611aca8 namespace=k8s.io Mar 7 01:11:18.131115 containerd[1977]: time="2026-03-07T01:11:18.131124169Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:11:18.762322 containerd[1977]: time="2026-03-07T01:11:18.762258091Z" level=info msg="CreateContainer within sandbox \"4686fa7115e02de717001fd487abd88c943ae6fc2c01a0fccc225f7cd2c87f88\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 7 01:11:18.793894 containerd[1977]: time="2026-03-07T01:11:18.792909461Z" level=info msg="CreateContainer within sandbox \"4686fa7115e02de717001fd487abd88c943ae6fc2c01a0fccc225f7cd2c87f88\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"be0156e94036461760cb012612796546e39316976e24cef7f2aa194a22e28ed1\"" Mar 7 01:11:18.793894 containerd[1977]: time="2026-03-07T01:11:18.793512460Z" level=info msg="StartContainer for \"be0156e94036461760cb012612796546e39316976e24cef7f2aa194a22e28ed1\"" Mar 7 01:11:18.866434 systemd[1]: Started 
cri-containerd-be0156e94036461760cb012612796546e39316976e24cef7f2aa194a22e28ed1.scope - libcontainer container be0156e94036461760cb012612796546e39316976e24cef7f2aa194a22e28ed1. Mar 7 01:11:18.910574 systemd[1]: cri-containerd-be0156e94036461760cb012612796546e39316976e24cef7f2aa194a22e28ed1.scope: Deactivated successfully. Mar 7 01:11:18.915740 containerd[1977]: time="2026-03-07T01:11:18.915702348Z" level=info msg="StartContainer for \"be0156e94036461760cb012612796546e39316976e24cef7f2aa194a22e28ed1\" returns successfully" Mar 7 01:11:18.941036 kubelet[3195]: I0307 01:11:18.935826 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-bw9ln" podStartSLOduration=2.392083888 podStartE2EDuration="14.935802308s" podCreationTimestamp="2026-03-07 01:11:04 +0000 UTC" firstStartedPulling="2026-03-07 01:11:05.177675107 +0000 UTC m=+5.732146458" lastFinishedPulling="2026-03-07 01:11:17.721393509 +0000 UTC m=+18.275864878" observedRunningTime="2026-03-07 01:11:18.935191754 +0000 UTC m=+19.489663128" watchObservedRunningTime="2026-03-07 01:11:18.935802308 +0000 UTC m=+19.490273682" Mar 7 01:11:18.963461 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be0156e94036461760cb012612796546e39316976e24cef7f2aa194a22e28ed1-rootfs.mount: Deactivated successfully. 
Mar 7 01:11:18.976627 containerd[1977]: time="2026-03-07T01:11:18.976562431Z" level=info msg="shim disconnected" id=be0156e94036461760cb012612796546e39316976e24cef7f2aa194a22e28ed1 namespace=k8s.io Mar 7 01:11:18.976627 containerd[1977]: time="2026-03-07T01:11:18.976626646Z" level=warning msg="cleaning up after shim disconnected" id=be0156e94036461760cb012612796546e39316976e24cef7f2aa194a22e28ed1 namespace=k8s.io Mar 7 01:11:18.979239 containerd[1977]: time="2026-03-07T01:11:18.976637100Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:11:19.761263 containerd[1977]: time="2026-03-07T01:11:19.761164231Z" level=info msg="CreateContainer within sandbox \"4686fa7115e02de717001fd487abd88c943ae6fc2c01a0fccc225f7cd2c87f88\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 7 01:11:19.790639 containerd[1977]: time="2026-03-07T01:11:19.790594542Z" level=info msg="CreateContainer within sandbox \"4686fa7115e02de717001fd487abd88c943ae6fc2c01a0fccc225f7cd2c87f88\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"449565f73dc31beed9a3fb34c76a004c2472501cd314019a0f3c0981336c57e3\"" Mar 7 01:11:19.791384 containerd[1977]: time="2026-03-07T01:11:19.791349928Z" level=info msg="StartContainer for \"449565f73dc31beed9a3fb34c76a004c2472501cd314019a0f3c0981336c57e3\"" Mar 7 01:11:19.831134 systemd[1]: Started cri-containerd-449565f73dc31beed9a3fb34c76a004c2472501cd314019a0f3c0981336c57e3.scope - libcontainer container 449565f73dc31beed9a3fb34c76a004c2472501cd314019a0f3c0981336c57e3. 
Mar 7 01:11:19.875644 containerd[1977]: time="2026-03-07T01:11:19.875586601Z" level=info msg="StartContainer for \"449565f73dc31beed9a3fb34c76a004c2472501cd314019a0f3c0981336c57e3\" returns successfully" Mar 7 01:11:20.394721 kubelet[3195]: I0307 01:11:20.394689 3195 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 7 01:11:20.484401 systemd[1]: Created slice kubepods-burstable-pod7d11dc4a_de85_46d9_becc_1ee2663a7621.slice - libcontainer container kubepods-burstable-pod7d11dc4a_de85_46d9_becc_1ee2663a7621.slice. Mar 7 01:11:20.493224 systemd[1]: Created slice kubepods-burstable-pod542e0422_930e_4d22_93a7_9481b0261d41.slice - libcontainer container kubepods-burstable-pod542e0422_930e_4d22_93a7_9481b0261d41.slice. Mar 7 01:11:20.543261 kubelet[3195]: I0307 01:11:20.543219 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm74w\" (UniqueName: \"kubernetes.io/projected/542e0422-930e-4d22-93a7-9481b0261d41-kube-api-access-xm74w\") pod \"coredns-674b8bbfcf-fqq97\" (UID: \"542e0422-930e-4d22-93a7-9481b0261d41\") " pod="kube-system/coredns-674b8bbfcf-fqq97" Mar 7 01:11:20.543437 kubelet[3195]: I0307 01:11:20.543272 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlnkq\" (UniqueName: \"kubernetes.io/projected/7d11dc4a-de85-46d9-becc-1ee2663a7621-kube-api-access-dlnkq\") pod \"coredns-674b8bbfcf-p7v9k\" (UID: \"7d11dc4a-de85-46d9-becc-1ee2663a7621\") " pod="kube-system/coredns-674b8bbfcf-p7v9k" Mar 7 01:11:20.543437 kubelet[3195]: I0307 01:11:20.543299 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/542e0422-930e-4d22-93a7-9481b0261d41-config-volume\") pod \"coredns-674b8bbfcf-fqq97\" (UID: \"542e0422-930e-4d22-93a7-9481b0261d41\") " pod="kube-system/coredns-674b8bbfcf-fqq97" Mar 7 01:11:20.543437 
kubelet[3195]: I0307 01:11:20.543332 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d11dc4a-de85-46d9-becc-1ee2663a7621-config-volume\") pod \"coredns-674b8bbfcf-p7v9k\" (UID: \"7d11dc4a-de85-46d9-becc-1ee2663a7621\") " pod="kube-system/coredns-674b8bbfcf-p7v9k" Mar 7 01:11:20.778606 kubelet[3195]: I0307 01:11:20.778438 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7hs4m" podStartSLOduration=6.917491252 podStartE2EDuration="17.778265727s" podCreationTimestamp="2026-03-07 01:11:03 +0000 UTC" firstStartedPulling="2026-03-07 01:11:04.808747667 +0000 UTC m=+5.363219019" lastFinishedPulling="2026-03-07 01:11:15.66952214 +0000 UTC m=+16.223993494" observedRunningTime="2026-03-07 01:11:20.776890603 +0000 UTC m=+21.331361974" watchObservedRunningTime="2026-03-07 01:11:20.778265727 +0000 UTC m=+21.332737101" Mar 7 01:11:20.798134 containerd[1977]: time="2026-03-07T01:11:20.798075587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p7v9k,Uid:7d11dc4a-de85-46d9-becc-1ee2663a7621,Namespace:kube-system,Attempt:0,}" Mar 7 01:11:20.802095 containerd[1977]: time="2026-03-07T01:11:20.801839733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fqq97,Uid:542e0422-930e-4d22-93a7-9481b0261d41,Namespace:kube-system,Attempt:0,}" Mar 7 01:11:26.981878 systemd-networkd[1863]: cilium_host: Link UP Mar 7 01:11:26.983708 (udev-worker)[4211]: Network interface NamePolicy= disabled on kernel command line. Mar 7 01:11:26.985247 systemd-networkd[1863]: cilium_net: Link UP Mar 7 01:11:26.986066 systemd-networkd[1863]: cilium_net: Gained carrier Mar 7 01:11:26.986649 systemd-networkd[1863]: cilium_host: Gained carrier Mar 7 01:11:26.987773 (udev-worker)[4178]: Network interface NamePolicy= disabled on kernel command line. 
Mar 7 01:11:27.383037 systemd-networkd[1863]: cilium_host: Gained IPv6LL Mar 7 01:11:27.423145 systemd-networkd[1863]: cilium_net: Gained IPv6LL Mar 7 01:11:27.854669 (udev-worker)[4215]: Network interface NamePolicy= disabled on kernel command line. Mar 7 01:11:27.867218 systemd-networkd[1863]: cilium_vxlan: Link UP Mar 7 01:11:27.867227 systemd-networkd[1863]: cilium_vxlan: Gained carrier Mar 7 01:11:29.767159 systemd-networkd[1863]: cilium_vxlan: Gained IPv6LL Mar 7 01:11:32.604128 ntpd[1947]: Listen normally on 8 cilium_host 192.168.0.171:123 Mar 7 01:11:32.604222 ntpd[1947]: Listen normally on 9 cilium_net [fe80::6019:4aff:fecb:c56f%4]:123 Mar 7 01:11:32.604645 ntpd[1947]: 7 Mar 01:11:32 ntpd[1947]: Listen normally on 8 cilium_host 192.168.0.171:123 Mar 7 01:11:32.604645 ntpd[1947]: 7 Mar 01:11:32 ntpd[1947]: Listen normally on 9 cilium_net [fe80::6019:4aff:fecb:c56f%4]:123 Mar 7 01:11:32.604645 ntpd[1947]: 7 Mar 01:11:32 ntpd[1947]: Listen normally on 10 cilium_host [fe80::8c40:fff:fe8a:fb3b%5]:123 Mar 7 01:11:32.604645 ntpd[1947]: 7 Mar 01:11:32 ntpd[1947]: Listen normally on 11 cilium_vxlan [fe80::dc8f:4bff:fe59:4294%6]:123 Mar 7 01:11:32.604280 ntpd[1947]: Listen normally on 10 cilium_host [fe80::8c40:fff:fe8a:fb3b%5]:123 Mar 7 01:11:32.604323 ntpd[1947]: Listen normally on 11 cilium_vxlan [fe80::dc8f:4bff:fe59:4294%6]:123 Mar 7 01:11:32.676097 kernel: NET: Registered PF_ALG protocol family Mar 7 01:11:33.983471 systemd-networkd[1863]: lxc_health: Link UP Mar 7 01:11:33.990163 (udev-worker)[4539]: Network interface NamePolicy= disabled on kernel command line. Mar 7 01:11:33.996323 systemd-networkd[1863]: lxc_health: Gained carrier Mar 7 01:11:34.459924 systemd-networkd[1863]: lxc9c1964dd9ce4: Link UP Mar 7 01:11:34.465009 kernel: eth0: renamed from tmpadd5b Mar 7 01:11:34.468632 systemd-networkd[1863]: lxc9c1964dd9ce4: Gained carrier Mar 7 01:11:34.469889 (udev-worker)[4552]: Network interface NamePolicy= disabled on kernel command line. 
Mar 7 01:11:34.508440 systemd-networkd[1863]: lxcd0bc9289c2d9: Link UP Mar 7 01:11:34.516982 kernel: eth0: renamed from tmp63232 Mar 7 01:11:34.523476 systemd-networkd[1863]: lxcd0bc9289c2d9: Gained carrier Mar 7 01:11:35.209016 systemd-networkd[1863]: lxc_health: Gained IPv6LL Mar 7 01:11:35.591144 systemd-networkd[1863]: lxc9c1964dd9ce4: Gained IPv6LL Mar 7 01:11:36.295181 systemd-networkd[1863]: lxcd0bc9289c2d9: Gained IPv6LL Mar 7 01:11:37.743149 systemd[1]: Started sshd@7-172.31.24.34:22-68.220.241.50:33056.service - OpenSSH per-connection server daemon (68.220.241.50:33056). Mar 7 01:11:38.277556 sshd[4585]: Accepted publickey for core from 68.220.241.50 port 33056 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:11:38.281219 sshd[4585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:11:38.301312 systemd-logind[1955]: New session 8 of user core. Mar 7 01:11:38.306412 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 7 01:11:38.604266 ntpd[1947]: Listen normally on 12 lxc_health [fe80::9895:7aff:fe93:d343%8]:123 Mar 7 01:11:38.604707 ntpd[1947]: 7 Mar 01:11:38 ntpd[1947]: Listen normally on 12 lxc_health [fe80::9895:7aff:fe93:d343%8]:123 Mar 7 01:11:38.604707 ntpd[1947]: 7 Mar 01:11:38 ntpd[1947]: Listen normally on 13 lxc9c1964dd9ce4 [fe80::d889:73ff:fe68:bec9%10]:123 Mar 7 01:11:38.604707 ntpd[1947]: 7 Mar 01:11:38 ntpd[1947]: Listen normally on 14 lxcd0bc9289c2d9 [fe80::c81f:37ff:fef0:facc%12]:123 Mar 7 01:11:38.604365 ntpd[1947]: Listen normally on 13 lxc9c1964dd9ce4 [fe80::d889:73ff:fe68:bec9%10]:123 Mar 7 01:11:38.604410 ntpd[1947]: Listen normally on 14 lxcd0bc9289c2d9 [fe80::c81f:37ff:fef0:facc%12]:123 Mar 7 01:11:39.257960 containerd[1977]: time="2026-03-07T01:11:39.257585504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:11:39.257960 containerd[1977]: time="2026-03-07T01:11:39.257673248Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:11:39.257960 containerd[1977]: time="2026-03-07T01:11:39.257698490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:11:39.257960 containerd[1977]: time="2026-03-07T01:11:39.257808163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:11:39.314269 systemd[1]: Started cri-containerd-add5bac1451ec9e6f7861b3e50ab91473410aafe5442944b4ad271f31a5db030.scope - libcontainer container add5bac1451ec9e6f7861b3e50ab91473410aafe5442944b4ad271f31a5db030. Mar 7 01:11:39.400900 containerd[1977]: time="2026-03-07T01:11:39.398098092Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:11:39.400900 containerd[1977]: time="2026-03-07T01:11:39.398173354Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:11:39.400900 containerd[1977]: time="2026-03-07T01:11:39.398216460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:11:39.400900 containerd[1977]: time="2026-03-07T01:11:39.398370286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:11:39.467087 systemd[1]: Started cri-containerd-6323293fb1bda2e7cb144b516e31487f814bcbd5b2dadd1dd80e7d9bd8ec94f6.scope - libcontainer container 6323293fb1bda2e7cb144b516e31487f814bcbd5b2dadd1dd80e7d9bd8ec94f6. 
Mar 7 01:11:39.606334 containerd[1977]: time="2026-03-07T01:11:39.606289243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p7v9k,Uid:7d11dc4a-de85-46d9-becc-1ee2663a7621,Namespace:kube-system,Attempt:0,} returns sandbox id \"add5bac1451ec9e6f7861b3e50ab91473410aafe5442944b4ad271f31a5db030\"" Mar 7 01:11:39.618190 containerd[1977]: time="2026-03-07T01:11:39.618143247Z" level=info msg="CreateContainer within sandbox \"add5bac1451ec9e6f7861b3e50ab91473410aafe5442944b4ad271f31a5db030\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 01:11:39.677095 containerd[1977]: time="2026-03-07T01:11:39.677043919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fqq97,Uid:542e0422-930e-4d22-93a7-9481b0261d41,Namespace:kube-system,Attempt:0,} returns sandbox id \"6323293fb1bda2e7cb144b516e31487f814bcbd5b2dadd1dd80e7d9bd8ec94f6\"" Mar 7 01:11:39.691799 containerd[1977]: time="2026-03-07T01:11:39.691749245Z" level=info msg="CreateContainer within sandbox \"6323293fb1bda2e7cb144b516e31487f814bcbd5b2dadd1dd80e7d9bd8ec94f6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 01:11:39.761615 sshd[4585]: pam_unix(sshd:session): session closed for user core Mar 7 01:11:39.765931 systemd[1]: sshd@7-172.31.24.34:22-68.220.241.50:33056.service: Deactivated successfully. Mar 7 01:11:39.768359 systemd[1]: session-8.scope: Deactivated successfully. Mar 7 01:11:39.770734 systemd-logind[1955]: Session 8 logged out. Waiting for processes to exit. Mar 7 01:11:39.772357 systemd-logind[1955]: Removed session 8. 
Mar 7 01:11:39.933724 containerd[1977]: time="2026-03-07T01:11:39.933681369Z" level=info msg="CreateContainer within sandbox \"add5bac1451ec9e6f7861b3e50ab91473410aafe5442944b4ad271f31a5db030\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8ed420d68d8c0dd86298335ba1f41c3c888c5e18a7feb2ee06d1da028e805062\"" Mar 7 01:11:39.934986 containerd[1977]: time="2026-03-07T01:11:39.934195336Z" level=info msg="StartContainer for \"8ed420d68d8c0dd86298335ba1f41c3c888c5e18a7feb2ee06d1da028e805062\"" Mar 7 01:11:39.937753 containerd[1977]: time="2026-03-07T01:11:39.937711738Z" level=info msg="CreateContainer within sandbox \"6323293fb1bda2e7cb144b516e31487f814bcbd5b2dadd1dd80e7d9bd8ec94f6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"48543f1f040182e7dba114a0bbd15eeba4389e40c5a2a39b1f48eecf7725b327\"" Mar 7 01:11:39.940551 containerd[1977]: time="2026-03-07T01:11:39.940478012Z" level=info msg="StartContainer for \"48543f1f040182e7dba114a0bbd15eeba4389e40c5a2a39b1f48eecf7725b327\"" Mar 7 01:11:39.986160 systemd[1]: Started cri-containerd-48543f1f040182e7dba114a0bbd15eeba4389e40c5a2a39b1f48eecf7725b327.scope - libcontainer container 48543f1f040182e7dba114a0bbd15eeba4389e40c5a2a39b1f48eecf7725b327. Mar 7 01:11:39.990672 systemd[1]: Started cri-containerd-8ed420d68d8c0dd86298335ba1f41c3c888c5e18a7feb2ee06d1da028e805062.scope - libcontainer container 8ed420d68d8c0dd86298335ba1f41c3c888c5e18a7feb2ee06d1da028e805062. 
Mar 7 01:11:40.130076 containerd[1977]: time="2026-03-07T01:11:40.129138849Z" level=info msg="StartContainer for \"48543f1f040182e7dba114a0bbd15eeba4389e40c5a2a39b1f48eecf7725b327\" returns successfully" Mar 7 01:11:40.136036 containerd[1977]: time="2026-03-07T01:11:40.135980938Z" level=info msg="StartContainer for \"8ed420d68d8c0dd86298335ba1f41c3c888c5e18a7feb2ee06d1da028e805062\" returns successfully" Mar 7 01:11:40.269898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1428112809.mount: Deactivated successfully. Mar 7 01:11:40.828857 kubelet[3195]: I0307 01:11:40.828742 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-fqq97" podStartSLOduration=36.828719296 podStartE2EDuration="36.828719296s" podCreationTimestamp="2026-03-07 01:11:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:11:40.828300052 +0000 UTC m=+41.382771426" watchObservedRunningTime="2026-03-07 01:11:40.828719296 +0000 UTC m=+41.383190668" Mar 7 01:11:44.851239 systemd[1]: Started sshd@8-172.31.24.34:22-68.220.241.50:53648.service - OpenSSH per-connection server daemon (68.220.241.50:53648). Mar 7 01:11:45.340751 sshd[4760]: Accepted publickey for core from 68.220.241.50 port 53648 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:11:45.342609 sshd[4760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:11:45.348792 systemd-logind[1955]: New session 9 of user core. Mar 7 01:11:45.354121 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 7 01:11:45.779001 sshd[4760]: pam_unix(sshd:session): session closed for user core Mar 7 01:11:45.785697 systemd[1]: sshd@8-172.31.24.34:22-68.220.241.50:53648.service: Deactivated successfully. Mar 7 01:11:45.788010 systemd[1]: session-9.scope: Deactivated successfully. 
Mar 7 01:11:45.789556 systemd-logind[1955]: Session 9 logged out. Waiting for processes to exit. Mar 7 01:11:45.791303 systemd-logind[1955]: Removed session 9. Mar 7 01:11:50.871240 systemd[1]: Started sshd@9-172.31.24.34:22-68.220.241.50:53664.service - OpenSSH per-connection server daemon (68.220.241.50:53664). Mar 7 01:11:51.391925 sshd[4785]: Accepted publickey for core from 68.220.241.50 port 53664 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:11:51.393061 sshd[4785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:11:51.398326 systemd-logind[1955]: New session 10 of user core. Mar 7 01:11:51.402059 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 7 01:11:51.845292 sshd[4785]: pam_unix(sshd:session): session closed for user core Mar 7 01:11:51.849552 systemd-logind[1955]: Session 10 logged out. Waiting for processes to exit. Mar 7 01:11:51.850367 systemd[1]: sshd@9-172.31.24.34:22-68.220.241.50:53664.service: Deactivated successfully. Mar 7 01:11:51.852583 systemd[1]: session-10.scope: Deactivated successfully. Mar 7 01:11:51.854001 systemd-logind[1955]: Removed session 10. Mar 7 01:11:52.872686 kubelet[3195]: I0307 01:11:52.872582 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-p7v9k" podStartSLOduration=48.872560011 podStartE2EDuration="48.872560011s" podCreationTimestamp="2026-03-07 01:11:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:11:40.845493999 +0000 UTC m=+41.399965373" watchObservedRunningTime="2026-03-07 01:11:52.872560011 +0000 UTC m=+53.427031386" Mar 7 01:11:56.936181 systemd[1]: Started sshd@10-172.31.24.34:22-68.220.241.50:54638.service - OpenSSH per-connection server daemon (68.220.241.50:54638). 
Mar 7 01:11:57.432844 sshd[4807]: Accepted publickey for core from 68.220.241.50 port 54638 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:11:57.434498 sshd[4807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:11:57.439708 systemd-logind[1955]: New session 11 of user core.
Mar 7 01:11:57.446139 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 7 01:11:57.873119 sshd[4807]: pam_unix(sshd:session): session closed for user core
Mar 7 01:11:57.879826 systemd[1]: sshd@10-172.31.24.34:22-68.220.241.50:54638.service: Deactivated successfully.
Mar 7 01:11:57.881862 systemd[1]: session-11.scope: Deactivated successfully.
Mar 7 01:11:57.883007 systemd-logind[1955]: Session 11 logged out. Waiting for processes to exit.
Mar 7 01:11:57.884117 systemd-logind[1955]: Removed session 11.
Mar 7 01:11:57.964537 systemd[1]: Started sshd@11-172.31.24.34:22-68.220.241.50:54642.service - OpenSSH per-connection server daemon (68.220.241.50:54642).
Mar 7 01:11:58.465927 sshd[4821]: Accepted publickey for core from 68.220.241.50 port 54642 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:11:58.467610 sshd[4821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:11:58.472931 systemd-logind[1955]: New session 12 of user core.
Mar 7 01:11:58.482213 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 7 01:11:59.147569 sshd[4821]: pam_unix(sshd:session): session closed for user core
Mar 7 01:11:59.152326 systemd[1]: sshd@11-172.31.24.34:22-68.220.241.50:54642.service: Deactivated successfully.
Mar 7 01:11:59.154842 systemd[1]: session-12.scope: Deactivated successfully.
Mar 7 01:11:59.156022 systemd-logind[1955]: Session 12 logged out. Waiting for processes to exit.
Mar 7 01:11:59.157333 systemd-logind[1955]: Removed session 12.
Mar 7 01:11:59.242293 systemd[1]: Started sshd@12-172.31.24.34:22-68.220.241.50:54644.service - OpenSSH per-connection server daemon (68.220.241.50:54644).
Mar 7 01:11:59.732028 sshd[4831]: Accepted publickey for core from 68.220.241.50 port 54644 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:11:59.733748 sshd[4831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:11:59.738850 systemd-logind[1955]: New session 13 of user core.
Mar 7 01:11:59.741076 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 7 01:12:00.177382 sshd[4831]: pam_unix(sshd:session): session closed for user core
Mar 7 01:12:00.181933 systemd-logind[1955]: Session 13 logged out. Waiting for processes to exit.
Mar 7 01:12:00.182686 systemd[1]: sshd@12-172.31.24.34:22-68.220.241.50:54644.service: Deactivated successfully.
Mar 7 01:12:00.185388 systemd[1]: session-13.scope: Deactivated successfully.
Mar 7 01:12:00.186801 systemd-logind[1955]: Removed session 13.
Mar 7 01:12:05.263721 systemd[1]: Started sshd@13-172.31.24.34:22-68.220.241.50:36398.service - OpenSSH per-connection server daemon (68.220.241.50:36398).
Mar 7 01:12:05.748922 sshd[4847]: Accepted publickey for core from 68.220.241.50 port 36398 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:12:05.750008 sshd[4847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:12:05.754612 systemd-logind[1955]: New session 14 of user core.
Mar 7 01:12:05.760076 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 7 01:12:06.162543 sshd[4847]: pam_unix(sshd:session): session closed for user core
Mar 7 01:12:06.167340 systemd[1]: sshd@13-172.31.24.34:22-68.220.241.50:36398.service: Deactivated successfully.
Mar 7 01:12:06.170754 systemd[1]: session-14.scope: Deactivated successfully.
Mar 7 01:12:06.171531 systemd-logind[1955]: Session 14 logged out. Waiting for processes to exit.
Mar 7 01:12:06.172783 systemd-logind[1955]: Removed session 14.
Mar 7 01:12:11.256236 systemd[1]: Started sshd@14-172.31.24.34:22-68.220.241.50:36404.service - OpenSSH per-connection server daemon (68.220.241.50:36404).
Mar 7 01:12:11.737110 sshd[4861]: Accepted publickey for core from 68.220.241.50 port 36404 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:12:11.738753 sshd[4861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:12:11.744214 systemd-logind[1955]: New session 15 of user core.
Mar 7 01:12:11.753144 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 7 01:12:12.153699 sshd[4861]: pam_unix(sshd:session): session closed for user core
Mar 7 01:12:12.158289 systemd[1]: sshd@14-172.31.24.34:22-68.220.241.50:36404.service: Deactivated successfully.
Mar 7 01:12:12.160381 systemd[1]: session-15.scope: Deactivated successfully.
Mar 7 01:12:12.161613 systemd-logind[1955]: Session 15 logged out. Waiting for processes to exit.
Mar 7 01:12:12.163307 systemd-logind[1955]: Removed session 15.
Mar 7 01:12:12.241694 systemd[1]: Started sshd@15-172.31.24.34:22-68.220.241.50:47082.service - OpenSSH per-connection server daemon (68.220.241.50:47082).
Mar 7 01:12:12.727487 sshd[4874]: Accepted publickey for core from 68.220.241.50 port 47082 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:12:12.728143 sshd[4874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:12:12.733387 systemd-logind[1955]: New session 16 of user core.
Mar 7 01:12:12.740111 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 7 01:12:19.444839 sshd[4874]: pam_unix(sshd:session): session closed for user core
Mar 7 01:12:19.449432 systemd-logind[1955]: Session 16 logged out. Waiting for processes to exit.
Mar 7 01:12:19.450237 systemd[1]: sshd@15-172.31.24.34:22-68.220.241.50:47082.service: Deactivated successfully.
Mar 7 01:12:19.453254 systemd[1]: session-16.scope: Deactivated successfully.
Mar 7 01:12:19.454400 systemd-logind[1955]: Removed session 16.
Mar 7 01:12:19.536234 systemd[1]: Started sshd@16-172.31.24.34:22-68.220.241.50:47090.service - OpenSSH per-connection server daemon (68.220.241.50:47090).
Mar 7 01:12:20.029806 sshd[4885]: Accepted publickey for core from 68.220.241.50 port 47090 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:12:20.030574 sshd[4885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:12:20.035305 systemd-logind[1955]: New session 17 of user core.
Mar 7 01:12:20.042186 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 7 01:12:21.564396 sshd[4885]: pam_unix(sshd:session): session closed for user core
Mar 7 01:12:21.576005 systemd-logind[1955]: Session 17 logged out. Waiting for processes to exit.
Mar 7 01:12:21.576413 systemd[1]: sshd@16-172.31.24.34:22-68.220.241.50:47090.service: Deactivated successfully.
Mar 7 01:12:21.578754 systemd[1]: session-17.scope: Deactivated successfully.
Mar 7 01:12:21.580449 systemd-logind[1955]: Removed session 17.
Mar 7 01:12:21.663269 systemd[1]: Started sshd@17-172.31.24.34:22-68.220.241.50:47094.service - OpenSSH per-connection server daemon (68.220.241.50:47094).
Mar 7 01:12:22.156587 sshd[4905]: Accepted publickey for core from 68.220.241.50 port 47094 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:12:22.158399 sshd[4905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:12:22.164044 systemd-logind[1955]: New session 18 of user core.
Mar 7 01:12:22.170155 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 7 01:12:22.739024 sshd[4905]: pam_unix(sshd:session): session closed for user core
Mar 7 01:12:22.743644 systemd[1]: sshd@17-172.31.24.34:22-68.220.241.50:47094.service: Deactivated successfully.
Mar 7 01:12:22.748145 systemd[1]: session-18.scope: Deactivated successfully.
Mar 7 01:12:22.750066 systemd-logind[1955]: Session 18 logged out. Waiting for processes to exit.
Mar 7 01:12:22.754615 systemd-logind[1955]: Removed session 18.
Mar 7 01:12:22.830260 systemd[1]: Started sshd@18-172.31.24.34:22-68.220.241.50:47706.service - OpenSSH per-connection server daemon (68.220.241.50:47706).
Mar 7 01:12:23.317377 sshd[4917]: Accepted publickey for core from 68.220.241.50 port 47706 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:12:23.319091 sshd[4917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:12:23.323354 systemd-logind[1955]: New session 19 of user core.
Mar 7 01:12:23.332142 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 7 01:12:23.743613 sshd[4917]: pam_unix(sshd:session): session closed for user core
Mar 7 01:12:23.747956 systemd-logind[1955]: Session 19 logged out. Waiting for processes to exit.
Mar 7 01:12:23.749105 systemd[1]: sshd@18-172.31.24.34:22-68.220.241.50:47706.service: Deactivated successfully.
Mar 7 01:12:23.751255 systemd[1]: session-19.scope: Deactivated successfully.
Mar 7 01:12:23.752339 systemd-logind[1955]: Removed session 19.
Mar 7 01:12:28.834545 systemd[1]: Started sshd@19-172.31.24.34:22-68.220.241.50:47722.service - OpenSSH per-connection server daemon (68.220.241.50:47722).
Mar 7 01:12:29.326413 sshd[4931]: Accepted publickey for core from 68.220.241.50 port 47722 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:12:29.327949 sshd[4931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:12:29.333069 systemd-logind[1955]: New session 20 of user core.
Mar 7 01:12:29.338159 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 7 01:12:29.771799 sshd[4931]: pam_unix(sshd:session): session closed for user core
Mar 7 01:12:29.776210 systemd-logind[1955]: Session 20 logged out. Waiting for processes to exit.
Mar 7 01:12:29.777030 systemd[1]: sshd@19-172.31.24.34:22-68.220.241.50:47722.service: Deactivated successfully.
Mar 7 01:12:29.779349 systemd[1]: session-20.scope: Deactivated successfully.
Mar 7 01:12:29.780461 systemd-logind[1955]: Removed session 20.
Mar 7 01:12:34.861726 systemd[1]: Started sshd@20-172.31.24.34:22-68.220.241.50:38176.service - OpenSSH per-connection server daemon (68.220.241.50:38176).
Mar 7 01:12:35.366553 sshd[4944]: Accepted publickey for core from 68.220.241.50 port 38176 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:12:35.368381 sshd[4944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:12:35.374507 systemd-logind[1955]: New session 21 of user core.
Mar 7 01:12:35.382096 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 7 01:12:35.796676 sshd[4944]: pam_unix(sshd:session): session closed for user core
Mar 7 01:12:35.801156 systemd-logind[1955]: Session 21 logged out. Waiting for processes to exit.
Mar 7 01:12:35.801780 systemd[1]: sshd@20-172.31.24.34:22-68.220.241.50:38176.service: Deactivated successfully.
Mar 7 01:12:35.804259 systemd[1]: session-21.scope: Deactivated successfully.
Mar 7 01:12:35.805421 systemd-logind[1955]: Removed session 21.
Mar 7 01:12:35.889243 systemd[1]: Started sshd@21-172.31.24.34:22-68.220.241.50:38190.service - OpenSSH per-connection server daemon (68.220.241.50:38190).
Mar 7 01:12:36.380422 sshd[4957]: Accepted publickey for core from 68.220.241.50 port 38190 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8
Mar 7 01:12:36.381074 sshd[4957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:12:36.385645 systemd-logind[1955]: New session 22 of user core.
Mar 7 01:12:36.392110 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 7 01:12:39.598892 containerd[1977]: time="2026-03-07T01:12:39.598470097Z" level=info msg="StopContainer for \"270dd1fa20c4f8400acd58af82b295246445a59f448bcac0edff072e06f1c116\" with timeout 30 (s)"
Mar 7 01:12:39.604531 containerd[1977]: time="2026-03-07T01:12:39.604024026Z" level=info msg="Stop container \"270dd1fa20c4f8400acd58af82b295246445a59f448bcac0edff072e06f1c116\" with signal terminated"
Mar 7 01:12:39.628775 systemd[1]: run-containerd-runc-k8s.io-449565f73dc31beed9a3fb34c76a004c2472501cd314019a0f3c0981336c57e3-runc.2Hatm3.mount: Deactivated successfully.
Mar 7 01:12:39.630594 systemd[1]: cri-containerd-270dd1fa20c4f8400acd58af82b295246445a59f448bcac0edff072e06f1c116.scope: Deactivated successfully.
Mar 7 01:12:39.660286 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-270dd1fa20c4f8400acd58af82b295246445a59f448bcac0edff072e06f1c116-rootfs.mount: Deactivated successfully.
Mar 7 01:12:39.672081 containerd[1977]: time="2026-03-07T01:12:39.672008292Z" level=info msg="shim disconnected" id=270dd1fa20c4f8400acd58af82b295246445a59f448bcac0edff072e06f1c116 namespace=k8s.io
Mar 7 01:12:39.672081 containerd[1977]: time="2026-03-07T01:12:39.672076584Z" level=warning msg="cleaning up after shim disconnected" id=270dd1fa20c4f8400acd58af82b295246445a59f448bcac0edff072e06f1c116 namespace=k8s.io
Mar 7 01:12:39.672081 containerd[1977]: time="2026-03-07T01:12:39.672089607Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:12:39.703058 containerd[1977]: time="2026-03-07T01:12:39.702533208Z" level=info msg="StopContainer for \"270dd1fa20c4f8400acd58af82b295246445a59f448bcac0edff072e06f1c116\" returns successfully"
Mar 7 01:12:39.703628 containerd[1977]: time="2026-03-07T01:12:39.703363210Z" level=info msg="StopPodSandbox for \"6287a5df5e53ee12718798fa80918097a81b7c3d9dbf4d3c6c54038d9e81dd2a\""
Mar 7 01:12:39.703628 containerd[1977]: time="2026-03-07T01:12:39.703411762Z" level=info msg="Container to stop \"270dd1fa20c4f8400acd58af82b295246445a59f448bcac0edff072e06f1c116\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 01:12:39.709520 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6287a5df5e53ee12718798fa80918097a81b7c3d9dbf4d3c6c54038d9e81dd2a-shm.mount: Deactivated successfully.
Mar 7 01:12:39.719217 containerd[1977]: time="2026-03-07T01:12:39.719094772Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 7 01:12:39.719741 containerd[1977]: time="2026-03-07T01:12:39.719589895Z" level=info msg="StopContainer for \"449565f73dc31beed9a3fb34c76a004c2472501cd314019a0f3c0981336c57e3\" with timeout 2 (s)"
Mar 7 01:12:39.719966 containerd[1977]: time="2026-03-07T01:12:39.719941438Z" level=info msg="Stop container \"449565f73dc31beed9a3fb34c76a004c2472501cd314019a0f3c0981336c57e3\" with signal terminated"
Mar 7 01:12:39.723206 systemd[1]: cri-containerd-6287a5df5e53ee12718798fa80918097a81b7c3d9dbf4d3c6c54038d9e81dd2a.scope: Deactivated successfully.
Mar 7 01:12:39.734892 systemd-networkd[1863]: lxc_health: Link DOWN
Mar 7 01:12:39.734903 systemd-networkd[1863]: lxc_health: Lost carrier
Mar 7 01:12:39.755015 systemd[1]: cri-containerd-449565f73dc31beed9a3fb34c76a004c2472501cd314019a0f3c0981336c57e3.scope: Deactivated successfully.
Mar 7 01:12:39.755310 systemd[1]: cri-containerd-449565f73dc31beed9a3fb34c76a004c2472501cd314019a0f3c0981336c57e3.scope: Consumed 8.270s CPU time.
Mar 7 01:12:39.774661 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6287a5df5e53ee12718798fa80918097a81b7c3d9dbf4d3c6c54038d9e81dd2a-rootfs.mount: Deactivated successfully.
Mar 7 01:12:39.791525 containerd[1977]: time="2026-03-07T01:12:39.790989308Z" level=info msg="shim disconnected" id=449565f73dc31beed9a3fb34c76a004c2472501cd314019a0f3c0981336c57e3 namespace=k8s.io
Mar 7 01:12:39.791525 containerd[1977]: time="2026-03-07T01:12:39.791070754Z" level=warning msg="cleaning up after shim disconnected" id=449565f73dc31beed9a3fb34c76a004c2472501cd314019a0f3c0981336c57e3 namespace=k8s.io
Mar 7 01:12:39.791525 containerd[1977]: time="2026-03-07T01:12:39.791083748Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:12:39.792674 containerd[1977]: time="2026-03-07T01:12:39.792402470Z" level=info msg="shim disconnected" id=6287a5df5e53ee12718798fa80918097a81b7c3d9dbf4d3c6c54038d9e81dd2a namespace=k8s.io
Mar 7 01:12:39.792674 containerd[1977]: time="2026-03-07T01:12:39.792516919Z" level=warning msg="cleaning up after shim disconnected" id=6287a5df5e53ee12718798fa80918097a81b7c3d9dbf4d3c6c54038d9e81dd2a namespace=k8s.io
Mar 7 01:12:39.792674 containerd[1977]: time="2026-03-07T01:12:39.792535158Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:12:39.808917 kubelet[3195]: E0307 01:12:39.808802 3195 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 7 01:12:39.824340 containerd[1977]: time="2026-03-07T01:12:39.824293907Z" level=info msg="StopContainer for \"449565f73dc31beed9a3fb34c76a004c2472501cd314019a0f3c0981336c57e3\" returns successfully"
Mar 7 01:12:39.825663 containerd[1977]: time="2026-03-07T01:12:39.825630377Z" level=info msg="StopPodSandbox for \"4686fa7115e02de717001fd487abd88c943ae6fc2c01a0fccc225f7cd2c87f88\""
Mar 7 01:12:39.825815 containerd[1977]: time="2026-03-07T01:12:39.825676409Z" level=info msg="Container to stop \"be0156e94036461760cb012612796546e39316976e24cef7f2aa194a22e28ed1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 01:12:39.825815 containerd[1977]: time="2026-03-07T01:12:39.825692208Z" level=info msg="Container to stop \"449565f73dc31beed9a3fb34c76a004c2472501cd314019a0f3c0981336c57e3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 01:12:39.825815 containerd[1977]: time="2026-03-07T01:12:39.825706923Z" level=info msg="Container to stop \"576c2b4dfa9b2acb53639b3cb64168ddfa88e7d5565cc6087eb7863fa7bbb605\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 01:12:39.825815 containerd[1977]: time="2026-03-07T01:12:39.825720942Z" level=info msg="Container to stop \"a57310c59b85e31e3bed121b8bb1c2d8402f2e1ade6976f2219b37d0c611aca8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 01:12:39.825815 containerd[1977]: time="2026-03-07T01:12:39.825735754Z" level=info msg="Container to stop \"d6b9c8bf005515f344871f585879a0c323c488e9704f050877199775b7466c79\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 01:12:39.834388 containerd[1977]: time="2026-03-07T01:12:39.833923462Z" level=info msg="TearDown network for sandbox \"6287a5df5e53ee12718798fa80918097a81b7c3d9dbf4d3c6c54038d9e81dd2a\" successfully"
Mar 7 01:12:39.834388 containerd[1977]: time="2026-03-07T01:12:39.833991012Z" level=info msg="StopPodSandbox for \"6287a5df5e53ee12718798fa80918097a81b7c3d9dbf4d3c6c54038d9e81dd2a\" returns successfully"
Mar 7 01:12:39.838124 systemd[1]: cri-containerd-4686fa7115e02de717001fd487abd88c943ae6fc2c01a0fccc225f7cd2c87f88.scope: Deactivated successfully.
Mar 7 01:12:39.890325 containerd[1977]: time="2026-03-07T01:12:39.890018584Z" level=info msg="shim disconnected" id=4686fa7115e02de717001fd487abd88c943ae6fc2c01a0fccc225f7cd2c87f88 namespace=k8s.io
Mar 7 01:12:39.890325 containerd[1977]: time="2026-03-07T01:12:39.890083777Z" level=warning msg="cleaning up after shim disconnected" id=4686fa7115e02de717001fd487abd88c943ae6fc2c01a0fccc225f7cd2c87f88 namespace=k8s.io
Mar 7 01:12:39.890325 containerd[1977]: time="2026-03-07T01:12:39.890095696Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:12:39.940935 containerd[1977]: time="2026-03-07T01:12:39.940353209Z" level=info msg="TearDown network for sandbox \"4686fa7115e02de717001fd487abd88c943ae6fc2c01a0fccc225f7cd2c87f88\" successfully"
Mar 7 01:12:39.941385 containerd[1977]: time="2026-03-07T01:12:39.941359897Z" level=info msg="StopPodSandbox for \"4686fa7115e02de717001fd487abd88c943ae6fc2c01a0fccc225f7cd2c87f88\" returns successfully"
Mar 7 01:12:39.942696 kubelet[3195]: I0307 01:12:39.942659 3195 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxtvt\" (UniqueName: \"kubernetes.io/projected/64431922-4fe4-4f7b-9d0f-612c5d8b318c-kube-api-access-hxtvt\") pod \"64431922-4fe4-4f7b-9d0f-612c5d8b318c\" (UID: \"64431922-4fe4-4f7b-9d0f-612c5d8b318c\") "
Mar 7 01:12:39.942825 kubelet[3195]: I0307 01:12:39.942741 3195 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/64431922-4fe4-4f7b-9d0f-612c5d8b318c-cilium-config-path\") pod \"64431922-4fe4-4f7b-9d0f-612c5d8b318c\" (UID: \"64431922-4fe4-4f7b-9d0f-612c5d8b318c\") "
Mar 7 01:12:39.972758 kubelet[3195]: I0307 01:12:39.968972 3195 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64431922-4fe4-4f7b-9d0f-612c5d8b318c-kube-api-access-hxtvt" (OuterVolumeSpecName: "kube-api-access-hxtvt") pod "64431922-4fe4-4f7b-9d0f-612c5d8b318c" (UID: "64431922-4fe4-4f7b-9d0f-612c5d8b318c"). InnerVolumeSpecName "kube-api-access-hxtvt". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 7 01:12:39.976694 kubelet[3195]: I0307 01:12:39.968897 3195 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64431922-4fe4-4f7b-9d0f-612c5d8b318c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "64431922-4fe4-4f7b-9d0f-612c5d8b318c" (UID: "64431922-4fe4-4f7b-9d0f-612c5d8b318c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 7 01:12:39.996230 kubelet[3195]: I0307 01:12:39.996187 3195 scope.go:117] "RemoveContainer" containerID="270dd1fa20c4f8400acd58af82b295246445a59f448bcac0edff072e06f1c116"
Mar 7 01:12:40.003064 systemd[1]: Removed slice kubepods-besteffort-pod64431922_4fe4_4f7b_9d0f_612c5d8b318c.slice - libcontainer container kubepods-besteffort-pod64431922_4fe4_4f7b_9d0f_612c5d8b318c.slice.
Mar 7 01:12:40.013753 containerd[1977]: time="2026-03-07T01:12:40.012719089Z" level=info msg="RemoveContainer for \"270dd1fa20c4f8400acd58af82b295246445a59f448bcac0edff072e06f1c116\""
Mar 7 01:12:40.020682 containerd[1977]: time="2026-03-07T01:12:40.020623458Z" level=info msg="RemoveContainer for \"270dd1fa20c4f8400acd58af82b295246445a59f448bcac0edff072e06f1c116\" returns successfully"
Mar 7 01:12:40.020999 kubelet[3195]: I0307 01:12:40.020971 3195 scope.go:117] "RemoveContainer" containerID="270dd1fa20c4f8400acd58af82b295246445a59f448bcac0edff072e06f1c116"
Mar 7 01:12:40.043939 kubelet[3195]: I0307 01:12:40.043431 3195 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2a28c2e3-1262-4c06-995d-2412396ce53a-cilium-cgroup\") pod \"2a28c2e3-1262-4c06-995d-2412396ce53a\" (UID: \"2a28c2e3-1262-4c06-995d-2412396ce53a\") "
Mar 7 01:12:40.043939 kubelet[3195]: I0307 01:12:40.043481 3195 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzbv9\" (UniqueName: \"kubernetes.io/projected/2a28c2e3-1262-4c06-995d-2412396ce53a-kube-api-access-fzbv9\") pod \"2a28c2e3-1262-4c06-995d-2412396ce53a\" (UID: \"2a28c2e3-1262-4c06-995d-2412396ce53a\") "
Mar 7 01:12:40.043939 kubelet[3195]: I0307 01:12:40.043508 3195 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2a28c2e3-1262-4c06-995d-2412396ce53a-hostproc\") pod \"2a28c2e3-1262-4c06-995d-2412396ce53a\" (UID: \"2a28c2e3-1262-4c06-995d-2412396ce53a\") "
Mar 7 01:12:40.043939 kubelet[3195]: I0307 01:12:40.043537 3195 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2a28c2e3-1262-4c06-995d-2412396ce53a-cilium-config-path\") pod \"2a28c2e3-1262-4c06-995d-2412396ce53a\" (UID: \"2a28c2e3-1262-4c06-995d-2412396ce53a\") "
Mar 7 01:12:40.043939 kubelet[3195]: I0307 01:12:40.043559 3195 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2a28c2e3-1262-4c06-995d-2412396ce53a-host-proc-sys-net\") pod \"2a28c2e3-1262-4c06-995d-2412396ce53a\" (UID: \"2a28c2e3-1262-4c06-995d-2412396ce53a\") "
Mar 7 01:12:40.043939 kubelet[3195]: I0307 01:12:40.043586 3195 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2a28c2e3-1262-4c06-995d-2412396ce53a-bpf-maps\") pod \"2a28c2e3-1262-4c06-995d-2412396ce53a\" (UID: \"2a28c2e3-1262-4c06-995d-2412396ce53a\") "
Mar 7 01:12:40.044444 kubelet[3195]: I0307 01:12:40.043617 3195 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2a28c2e3-1262-4c06-995d-2412396ce53a-hubble-tls\") pod \"2a28c2e3-1262-4c06-995d-2412396ce53a\" (UID: \"2a28c2e3-1262-4c06-995d-2412396ce53a\") "
Mar 7 01:12:40.044444 kubelet[3195]: I0307 01:12:40.043640 3195 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2a28c2e3-1262-4c06-995d-2412396ce53a-lib-modules\") pod \"2a28c2e3-1262-4c06-995d-2412396ce53a\" (UID: \"2a28c2e3-1262-4c06-995d-2412396ce53a\") "
Mar 7 01:12:40.044444 kubelet[3195]: I0307 01:12:40.043668 3195 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2a28c2e3-1262-4c06-995d-2412396ce53a-clustermesh-secrets\") pod \"2a28c2e3-1262-4c06-995d-2412396ce53a\" (UID: \"2a28c2e3-1262-4c06-995d-2412396ce53a\") "
Mar 7 01:12:40.044444 kubelet[3195]: I0307 01:12:40.043693 3195 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2a28c2e3-1262-4c06-995d-2412396ce53a-host-proc-sys-kernel\") pod \"2a28c2e3-1262-4c06-995d-2412396ce53a\" (UID: \"2a28c2e3-1262-4c06-995d-2412396ce53a\") "
Mar 7 01:12:40.044444 kubelet[3195]: I0307 01:12:40.043718 3195 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2a28c2e3-1262-4c06-995d-2412396ce53a-etc-cni-netd\") pod \"2a28c2e3-1262-4c06-995d-2412396ce53a\" (UID: \"2a28c2e3-1262-4c06-995d-2412396ce53a\") "
Mar 7 01:12:40.044444 kubelet[3195]: I0307 01:12:40.043741 3195 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2a28c2e3-1262-4c06-995d-2412396ce53a-cilium-run\") pod \"2a28c2e3-1262-4c06-995d-2412396ce53a\" (UID: \"2a28c2e3-1262-4c06-995d-2412396ce53a\") "
Mar 7 01:12:40.044726 kubelet[3195]: I0307 01:12:40.043764 3195 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2a28c2e3-1262-4c06-995d-2412396ce53a-cni-path\") pod \"2a28c2e3-1262-4c06-995d-2412396ce53a\" (UID: \"2a28c2e3-1262-4c06-995d-2412396ce53a\") "
Mar 7 01:12:40.044726 kubelet[3195]: I0307 01:12:40.043786 3195 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2a28c2e3-1262-4c06-995d-2412396ce53a-xtables-lock\") pod \"2a28c2e3-1262-4c06-995d-2412396ce53a\" (UID: \"2a28c2e3-1262-4c06-995d-2412396ce53a\") "
Mar 7 01:12:40.049333 kubelet[3195]: I0307 01:12:40.046816 3195 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hxtvt\" (UniqueName: \"kubernetes.io/projected/64431922-4fe4-4f7b-9d0f-612c5d8b318c-kube-api-access-hxtvt\") on node \"ip-172-31-24-34\" DevicePath \"\""
Mar 7 01:12:40.049333 kubelet[3195]: I0307 01:12:40.048087 3195 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/64431922-4fe4-4f7b-9d0f-612c5d8b318c-cilium-config-path\") on node \"ip-172-31-24-34\" DevicePath \"\""
Mar 7 01:12:40.049333 kubelet[3195]: I0307 01:12:40.048147 3195 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a28c2e3-1262-4c06-995d-2412396ce53a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2a28c2e3-1262-4c06-995d-2412396ce53a" (UID: "2a28c2e3-1262-4c06-995d-2412396ce53a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:12:40.049333 kubelet[3195]: I0307 01:12:40.048196 3195 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a28c2e3-1262-4c06-995d-2412396ce53a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2a28c2e3-1262-4c06-995d-2412396ce53a" (UID: "2a28c2e3-1262-4c06-995d-2412396ce53a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:12:40.054145 kubelet[3195]: I0307 01:12:40.054103 3195 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a28c2e3-1262-4c06-995d-2412396ce53a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2a28c2e3-1262-4c06-995d-2412396ce53a" (UID: "2a28c2e3-1262-4c06-995d-2412396ce53a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:12:40.060826 kubelet[3195]: I0307 01:12:40.060080 3195 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a28c2e3-1262-4c06-995d-2412396ce53a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2a28c2e3-1262-4c06-995d-2412396ce53a" (UID: "2a28c2e3-1262-4c06-995d-2412396ce53a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:12:40.060826 kubelet[3195]: I0307 01:12:40.060142 3195 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a28c2e3-1262-4c06-995d-2412396ce53a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2a28c2e3-1262-4c06-995d-2412396ce53a" (UID: "2a28c2e3-1262-4c06-995d-2412396ce53a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:12:40.060826 kubelet[3195]: I0307 01:12:40.060167 3195 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a28c2e3-1262-4c06-995d-2412396ce53a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2a28c2e3-1262-4c06-995d-2412396ce53a" (UID: "2a28c2e3-1262-4c06-995d-2412396ce53a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:12:40.060826 kubelet[3195]: I0307 01:12:40.060188 3195 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a28c2e3-1262-4c06-995d-2412396ce53a-cni-path" (OuterVolumeSpecName: "cni-path") pod "2a28c2e3-1262-4c06-995d-2412396ce53a" (UID: "2a28c2e3-1262-4c06-995d-2412396ce53a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:12:40.060826 kubelet[3195]: I0307 01:12:40.060299 3195 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a28c2e3-1262-4c06-995d-2412396ce53a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2a28c2e3-1262-4c06-995d-2412396ce53a" (UID: "2a28c2e3-1262-4c06-995d-2412396ce53a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 7 01:12:40.063889 kubelet[3195]: I0307 01:12:40.063262 3195 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a28c2e3-1262-4c06-995d-2412396ce53a-kube-api-access-fzbv9" (OuterVolumeSpecName: "kube-api-access-fzbv9") pod "2a28c2e3-1262-4c06-995d-2412396ce53a" (UID: "2a28c2e3-1262-4c06-995d-2412396ce53a"). InnerVolumeSpecName "kube-api-access-fzbv9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 7 01:12:40.063889 kubelet[3195]: I0307 01:12:40.063318 3195 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a28c2e3-1262-4c06-995d-2412396ce53a-hostproc" (OuterVolumeSpecName: "hostproc") pod "2a28c2e3-1262-4c06-995d-2412396ce53a" (UID: "2a28c2e3-1262-4c06-995d-2412396ce53a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:12:40.068417 kubelet[3195]: I0307 01:12:40.067926 3195 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a28c2e3-1262-4c06-995d-2412396ce53a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2a28c2e3-1262-4c06-995d-2412396ce53a" (UID: "2a28c2e3-1262-4c06-995d-2412396ce53a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 7 01:12:40.068417 kubelet[3195]: I0307 01:12:40.068002 3195 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a28c2e3-1262-4c06-995d-2412396ce53a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2a28c2e3-1262-4c06-995d-2412396ce53a" (UID: "2a28c2e3-1262-4c06-995d-2412396ce53a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:12:40.068417 kubelet[3195]: I0307 01:12:40.068026 3195 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a28c2e3-1262-4c06-995d-2412396ce53a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2a28c2e3-1262-4c06-995d-2412396ce53a" (UID: "2a28c2e3-1262-4c06-995d-2412396ce53a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:12:40.068417 kubelet[3195]: I0307 01:12:40.068104 3195 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a28c2e3-1262-4c06-995d-2412396ce53a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2a28c2e3-1262-4c06-995d-2412396ce53a" (UID: "2a28c2e3-1262-4c06-995d-2412396ce53a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 7 01:12:40.070775 containerd[1977]: time="2026-03-07T01:12:40.038481924Z" level=error msg="ContainerStatus for \"270dd1fa20c4f8400acd58af82b295246445a59f448bcac0edff072e06f1c116\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"270dd1fa20c4f8400acd58af82b295246445a59f448bcac0edff072e06f1c116\": not found"
Mar 7 01:12:40.088261 kubelet[3195]: E0307 01:12:40.088156 3195 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"270dd1fa20c4f8400acd58af82b295246445a59f448bcac0edff072e06f1c116\": not found" containerID="270dd1fa20c4f8400acd58af82b295246445a59f448bcac0edff072e06f1c116"
Mar 7 01:12:40.112056 kubelet[3195]: I0307 01:12:40.088265 3195 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"270dd1fa20c4f8400acd58af82b295246445a59f448bcac0edff072e06f1c116"} err="failed to get container status \"270dd1fa20c4f8400acd58af82b295246445a59f448bcac0edff072e06f1c116\": rpc error: code = NotFound desc = an error occurred when try to find container \"270dd1fa20c4f8400acd58af82b295246445a59f448bcac0edff072e06f1c116\": not found"
Mar 7 01:12:40.112056 kubelet[3195]: I0307 01:12:40.112058 3195 scope.go:117] "RemoveContainer" containerID="449565f73dc31beed9a3fb34c76a004c2472501cd314019a0f3c0981336c57e3"
Mar 7 01:12:40.113667 containerd[1977]: time="2026-03-07T01:12:40.113420016Z" level=info msg="RemoveContainer for \"449565f73dc31beed9a3fb34c76a004c2472501cd314019a0f3c0981336c57e3\""
Mar 7 01:12:40.118787 containerd[1977]: time="2026-03-07T01:12:40.118725570Z" level=info msg="RemoveContainer for \"449565f73dc31beed9a3fb34c76a004c2472501cd314019a0f3c0981336c57e3\" returns successfully"
Mar 7 01:12:40.119002 kubelet[3195]: I0307 01:12:40.118988 3195 scope.go:117] "RemoveContainer"
containerID="be0156e94036461760cb012612796546e39316976e24cef7f2aa194a22e28ed1" Mar 7 01:12:40.120346 containerd[1977]: time="2026-03-07T01:12:40.120308344Z" level=info msg="RemoveContainer for \"be0156e94036461760cb012612796546e39316976e24cef7f2aa194a22e28ed1\"" Mar 7 01:12:40.125719 containerd[1977]: time="2026-03-07T01:12:40.125688639Z" level=info msg="RemoveContainer for \"be0156e94036461760cb012612796546e39316976e24cef7f2aa194a22e28ed1\" returns successfully" Mar 7 01:12:40.126466 kubelet[3195]: I0307 01:12:40.126141 3195 scope.go:117] "RemoveContainer" containerID="a57310c59b85e31e3bed121b8bb1c2d8402f2e1ade6976f2219b37d0c611aca8" Mar 7 01:12:40.127543 containerd[1977]: time="2026-03-07T01:12:40.127298595Z" level=info msg="RemoveContainer for \"a57310c59b85e31e3bed121b8bb1c2d8402f2e1ade6976f2219b37d0c611aca8\"" Mar 7 01:12:40.132456 containerd[1977]: time="2026-03-07T01:12:40.132418917Z" level=info msg="RemoveContainer for \"a57310c59b85e31e3bed121b8bb1c2d8402f2e1ade6976f2219b37d0c611aca8\" returns successfully" Mar 7 01:12:40.132684 kubelet[3195]: I0307 01:12:40.132665 3195 scope.go:117] "RemoveContainer" containerID="d6b9c8bf005515f344871f585879a0c323c488e9704f050877199775b7466c79" Mar 7 01:12:40.133815 containerd[1977]: time="2026-03-07T01:12:40.133745619Z" level=info msg="RemoveContainer for \"d6b9c8bf005515f344871f585879a0c323c488e9704f050877199775b7466c79\"" Mar 7 01:12:40.138845 containerd[1977]: time="2026-03-07T01:12:40.138804019Z" level=info msg="RemoveContainer for \"d6b9c8bf005515f344871f585879a0c323c488e9704f050877199775b7466c79\" returns successfully" Mar 7 01:12:40.139080 kubelet[3195]: I0307 01:12:40.139046 3195 scope.go:117] "RemoveContainer" containerID="576c2b4dfa9b2acb53639b3cb64168ddfa88e7d5565cc6087eb7863fa7bbb605" Mar 7 01:12:40.140206 containerd[1977]: time="2026-03-07T01:12:40.140177616Z" level=info msg="RemoveContainer for \"576c2b4dfa9b2acb53639b3cb64168ddfa88e7d5565cc6087eb7863fa7bbb605\"" Mar 7 01:12:40.145314 containerd[1977]: 
time="2026-03-07T01:12:40.145200431Z" level=info msg="RemoveContainer for \"576c2b4dfa9b2acb53639b3cb64168ddfa88e7d5565cc6087eb7863fa7bbb605\" returns successfully" Mar 7 01:12:40.147179 kubelet[3195]: I0307 01:12:40.147072 3195 scope.go:117] "RemoveContainer" containerID="449565f73dc31beed9a3fb34c76a004c2472501cd314019a0f3c0981336c57e3" Mar 7 01:12:40.147381 containerd[1977]: time="2026-03-07T01:12:40.147339970Z" level=error msg="ContainerStatus for \"449565f73dc31beed9a3fb34c76a004c2472501cd314019a0f3c0981336c57e3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"449565f73dc31beed9a3fb34c76a004c2472501cd314019a0f3c0981336c57e3\": not found" Mar 7 01:12:40.148242 kubelet[3195]: E0307 01:12:40.148215 3195 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"449565f73dc31beed9a3fb34c76a004c2472501cd314019a0f3c0981336c57e3\": not found" containerID="449565f73dc31beed9a3fb34c76a004c2472501cd314019a0f3c0981336c57e3" Mar 7 01:12:40.148311 kubelet[3195]: I0307 01:12:40.148253 3195 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"449565f73dc31beed9a3fb34c76a004c2472501cd314019a0f3c0981336c57e3"} err="failed to get container status \"449565f73dc31beed9a3fb34c76a004c2472501cd314019a0f3c0981336c57e3\": rpc error: code = NotFound desc = an error occurred when try to find container \"449565f73dc31beed9a3fb34c76a004c2472501cd314019a0f3c0981336c57e3\": not found" Mar 7 01:12:40.148311 kubelet[3195]: I0307 01:12:40.148279 3195 scope.go:117] "RemoveContainer" containerID="be0156e94036461760cb012612796546e39316976e24cef7f2aa194a22e28ed1" Mar 7 01:12:40.148675 kubelet[3195]: I0307 01:12:40.148463 3195 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2a28c2e3-1262-4c06-995d-2412396ce53a-host-proc-sys-kernel\") on node 
\"ip-172-31-24-34\" DevicePath \"\"" Mar 7 01:12:40.148675 kubelet[3195]: I0307 01:12:40.148484 3195 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2a28c2e3-1262-4c06-995d-2412396ce53a-etc-cni-netd\") on node \"ip-172-31-24-34\" DevicePath \"\"" Mar 7 01:12:40.148675 kubelet[3195]: I0307 01:12:40.148508 3195 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2a28c2e3-1262-4c06-995d-2412396ce53a-cilium-run\") on node \"ip-172-31-24-34\" DevicePath \"\"" Mar 7 01:12:40.148675 kubelet[3195]: I0307 01:12:40.148521 3195 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2a28c2e3-1262-4c06-995d-2412396ce53a-cni-path\") on node \"ip-172-31-24-34\" DevicePath \"\"" Mar 7 01:12:40.148675 kubelet[3195]: I0307 01:12:40.148534 3195 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2a28c2e3-1262-4c06-995d-2412396ce53a-xtables-lock\") on node \"ip-172-31-24-34\" DevicePath \"\"" Mar 7 01:12:40.148675 kubelet[3195]: I0307 01:12:40.148550 3195 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2a28c2e3-1262-4c06-995d-2412396ce53a-cilium-cgroup\") on node \"ip-172-31-24-34\" DevicePath \"\"" Mar 7 01:12:40.148675 kubelet[3195]: I0307 01:12:40.148564 3195 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fzbv9\" (UniqueName: \"kubernetes.io/projected/2a28c2e3-1262-4c06-995d-2412396ce53a-kube-api-access-fzbv9\") on node \"ip-172-31-24-34\" DevicePath \"\"" Mar 7 01:12:40.148675 kubelet[3195]: I0307 01:12:40.148576 3195 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2a28c2e3-1262-4c06-995d-2412396ce53a-hostproc\") on node \"ip-172-31-24-34\" DevicePath \"\"" Mar 7 01:12:40.149105 containerd[1977]: 
time="2026-03-07T01:12:40.148609746Z" level=error msg="ContainerStatus for \"be0156e94036461760cb012612796546e39316976e24cef7f2aa194a22e28ed1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"be0156e94036461760cb012612796546e39316976e24cef7f2aa194a22e28ed1\": not found" Mar 7 01:12:40.149105 containerd[1977]: time="2026-03-07T01:12:40.148991884Z" level=error msg="ContainerStatus for \"a57310c59b85e31e3bed121b8bb1c2d8402f2e1ade6976f2219b37d0c611aca8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a57310c59b85e31e3bed121b8bb1c2d8402f2e1ade6976f2219b37d0c611aca8\": not found" Mar 7 01:12:40.149201 kubelet[3195]: I0307 01:12:40.148590 3195 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2a28c2e3-1262-4c06-995d-2412396ce53a-cilium-config-path\") on node \"ip-172-31-24-34\" DevicePath \"\"" Mar 7 01:12:40.149201 kubelet[3195]: I0307 01:12:40.148604 3195 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2a28c2e3-1262-4c06-995d-2412396ce53a-host-proc-sys-net\") on node \"ip-172-31-24-34\" DevicePath \"\"" Mar 7 01:12:40.149201 kubelet[3195]: I0307 01:12:40.148615 3195 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2a28c2e3-1262-4c06-995d-2412396ce53a-bpf-maps\") on node \"ip-172-31-24-34\" DevicePath \"\"" Mar 7 01:12:40.149201 kubelet[3195]: I0307 01:12:40.148627 3195 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2a28c2e3-1262-4c06-995d-2412396ce53a-hubble-tls\") on node \"ip-172-31-24-34\" DevicePath \"\"" Mar 7 01:12:40.149201 kubelet[3195]: I0307 01:12:40.148643 3195 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/2a28c2e3-1262-4c06-995d-2412396ce53a-lib-modules\") on node \"ip-172-31-24-34\" DevicePath \"\"" Mar 7 01:12:40.149201 kubelet[3195]: I0307 01:12:40.148656 3195 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2a28c2e3-1262-4c06-995d-2412396ce53a-clustermesh-secrets\") on node \"ip-172-31-24-34\" DevicePath \"\"" Mar 7 01:12:40.149201 kubelet[3195]: E0307 01:12:40.148767 3195 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"be0156e94036461760cb012612796546e39316976e24cef7f2aa194a22e28ed1\": not found" containerID="be0156e94036461760cb012612796546e39316976e24cef7f2aa194a22e28ed1" Mar 7 01:12:40.149565 kubelet[3195]: I0307 01:12:40.148791 3195 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"be0156e94036461760cb012612796546e39316976e24cef7f2aa194a22e28ed1"} err="failed to get container status \"be0156e94036461760cb012612796546e39316976e24cef7f2aa194a22e28ed1\": rpc error: code = NotFound desc = an error occurred when try to find container \"be0156e94036461760cb012612796546e39316976e24cef7f2aa194a22e28ed1\": not found" Mar 7 01:12:40.149565 kubelet[3195]: I0307 01:12:40.148815 3195 scope.go:117] "RemoveContainer" containerID="a57310c59b85e31e3bed121b8bb1c2d8402f2e1ade6976f2219b37d0c611aca8" Mar 7 01:12:40.149565 kubelet[3195]: E0307 01:12:40.149101 3195 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a57310c59b85e31e3bed121b8bb1c2d8402f2e1ade6976f2219b37d0c611aca8\": not found" containerID="a57310c59b85e31e3bed121b8bb1c2d8402f2e1ade6976f2219b37d0c611aca8" Mar 7 01:12:40.149565 kubelet[3195]: I0307 01:12:40.149124 3195 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"a57310c59b85e31e3bed121b8bb1c2d8402f2e1ade6976f2219b37d0c611aca8"} err="failed to get container status \"a57310c59b85e31e3bed121b8bb1c2d8402f2e1ade6976f2219b37d0c611aca8\": rpc error: code = NotFound desc = an error occurred when try to find container \"a57310c59b85e31e3bed121b8bb1c2d8402f2e1ade6976f2219b37d0c611aca8\": not found" Mar 7 01:12:40.149565 kubelet[3195]: I0307 01:12:40.149143 3195 scope.go:117] "RemoveContainer" containerID="d6b9c8bf005515f344871f585879a0c323c488e9704f050877199775b7466c79" Mar 7 01:12:40.149565 kubelet[3195]: E0307 01:12:40.149514 3195 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d6b9c8bf005515f344871f585879a0c323c488e9704f050877199775b7466c79\": not found" containerID="d6b9c8bf005515f344871f585879a0c323c488e9704f050877199775b7466c79" Mar 7 01:12:40.150017 containerd[1977]: time="2026-03-07T01:12:40.149321936Z" level=error msg="ContainerStatus for \"d6b9c8bf005515f344871f585879a0c323c488e9704f050877199775b7466c79\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d6b9c8bf005515f344871f585879a0c323c488e9704f050877199775b7466c79\": not found" Mar 7 01:12:40.150017 containerd[1977]: time="2026-03-07T01:12:40.149737515Z" level=error msg="ContainerStatus for \"576c2b4dfa9b2acb53639b3cb64168ddfa88e7d5565cc6087eb7863fa7bbb605\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"576c2b4dfa9b2acb53639b3cb64168ddfa88e7d5565cc6087eb7863fa7bbb605\": not found" Mar 7 01:12:40.150075 kubelet[3195]: I0307 01:12:40.149541 3195 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d6b9c8bf005515f344871f585879a0c323c488e9704f050877199775b7466c79"} err="failed to get container status \"d6b9c8bf005515f344871f585879a0c323c488e9704f050877199775b7466c79\": rpc error: code = NotFound desc 
= an error occurred when try to find container \"d6b9c8bf005515f344871f585879a0c323c488e9704f050877199775b7466c79\": not found" Mar 7 01:12:40.150075 kubelet[3195]: I0307 01:12:40.149564 3195 scope.go:117] "RemoveContainer" containerID="576c2b4dfa9b2acb53639b3cb64168ddfa88e7d5565cc6087eb7863fa7bbb605" Mar 7 01:12:40.150075 kubelet[3195]: E0307 01:12:40.149854 3195 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"576c2b4dfa9b2acb53639b3cb64168ddfa88e7d5565cc6087eb7863fa7bbb605\": not found" containerID="576c2b4dfa9b2acb53639b3cb64168ddfa88e7d5565cc6087eb7863fa7bbb605" Mar 7 01:12:40.150075 kubelet[3195]: I0307 01:12:40.149902 3195 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"576c2b4dfa9b2acb53639b3cb64168ddfa88e7d5565cc6087eb7863fa7bbb605"} err="failed to get container status \"576c2b4dfa9b2acb53639b3cb64168ddfa88e7d5565cc6087eb7863fa7bbb605\": rpc error: code = NotFound desc = an error occurred when try to find container \"576c2b4dfa9b2acb53639b3cb64168ddfa88e7d5565cc6087eb7863fa7bbb605\": not found" Mar 7 01:12:40.619757 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-449565f73dc31beed9a3fb34c76a004c2472501cd314019a0f3c0981336c57e3-rootfs.mount: Deactivated successfully. Mar 7 01:12:40.619893 systemd[1]: var-lib-kubelet-pods-64431922\x2d4fe4\x2d4f7b\x2d9d0f\x2d612c5d8b318c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhxtvt.mount: Deactivated successfully. Mar 7 01:12:40.619997 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4686fa7115e02de717001fd487abd88c943ae6fc2c01a0fccc225f7cd2c87f88-rootfs.mount: Deactivated successfully. Mar 7 01:12:40.620080 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4686fa7115e02de717001fd487abd88c943ae6fc2c01a0fccc225f7cd2c87f88-shm.mount: Deactivated successfully. 
Mar 7 01:12:40.620170 systemd[1]: var-lib-kubelet-pods-2a28c2e3\x2d1262\x2d4c06\x2d995d\x2d2412396ce53a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfzbv9.mount: Deactivated successfully. Mar 7 01:12:40.620260 systemd[1]: var-lib-kubelet-pods-2a28c2e3\x2d1262\x2d4c06\x2d995d\x2d2412396ce53a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 7 01:12:40.620344 systemd[1]: var-lib-kubelet-pods-2a28c2e3\x2d1262\x2d4c06\x2d995d\x2d2412396ce53a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 7 01:12:41.032953 systemd[1]: Removed slice kubepods-burstable-pod2a28c2e3_1262_4c06_995d_2412396ce53a.slice - libcontainer container kubepods-burstable-pod2a28c2e3_1262_4c06_995d_2412396ce53a.slice. Mar 7 01:12:41.033207 systemd[1]: kubepods-burstable-pod2a28c2e3_1262_4c06_995d_2412396ce53a.slice: Consumed 8.369s CPU time. Mar 7 01:12:41.338568 kubelet[3195]: I0307 01:12:41.338449 3195 setters.go:618] "Node became not ready" node="ip-172-31-24-34" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T01:12:41Z","lastTransitionTime":"2026-03-07T01:12:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 7 01:12:41.558564 sshd[4957]: pam_unix(sshd:session): session closed for user core Mar 7 01:12:41.563745 systemd[1]: sshd@21-172.31.24.34:22-68.220.241.50:38190.service: Deactivated successfully. Mar 7 01:12:41.564240 systemd-logind[1955]: Session 22 logged out. Waiting for processes to exit. Mar 7 01:12:41.567114 systemd[1]: session-22.scope: Deactivated successfully. Mar 7 01:12:41.568334 systemd-logind[1955]: Removed session 22. 
Mar 7 01:12:41.598442 kubelet[3195]: I0307 01:12:41.597987 3195 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a28c2e3-1262-4c06-995d-2412396ce53a" path="/var/lib/kubelet/pods/2a28c2e3-1262-4c06-995d-2412396ce53a/volumes" Mar 7 01:12:41.598767 kubelet[3195]: I0307 01:12:41.598738 3195 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64431922-4fe4-4f7b-9d0f-612c5d8b318c" path="/var/lib/kubelet/pods/64431922-4fe4-4f7b-9d0f-612c5d8b318c/volumes" Mar 7 01:12:41.655728 systemd[1]: Started sshd@22-172.31.24.34:22-68.220.241.50:38200.service - OpenSSH per-connection server daemon (68.220.241.50:38200). Mar 7 01:12:42.144918 sshd[5121]: Accepted publickey for core from 68.220.241.50 port 38200 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:12:42.146148 sshd[5121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:12:42.151405 systemd-logind[1955]: New session 23 of user core. Mar 7 01:12:42.156067 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 7 01:12:42.604174 ntpd[1947]: Deleting interface #12 lxc_health, fe80::9895:7aff:fe93:d343%8#123, interface stats: received=0, sent=0, dropped=0, active_time=64 secs Mar 7 01:12:42.604559 ntpd[1947]: 7 Mar 01:12:42 ntpd[1947]: Deleting interface #12 lxc_health, fe80::9895:7aff:fe93:d343%8#123, interface stats: received=0, sent=0, dropped=0, active_time=64 secs Mar 7 01:12:43.229671 systemd[1]: Created slice kubepods-burstable-pod7c5660a6_0a81_46dd_9996_b2503cb3884a.slice - libcontainer container kubepods-burstable-pod7c5660a6_0a81_46dd_9996_b2503cb3884a.slice. Mar 7 01:12:43.258201 sshd[5121]: pam_unix(sshd:session): session closed for user core Mar 7 01:12:43.264027 systemd[1]: sshd@22-172.31.24.34:22-68.220.241.50:38200.service: Deactivated successfully. Mar 7 01:12:43.268610 systemd[1]: session-23.scope: Deactivated successfully. 
Mar 7 01:12:43.269082 kubelet[3195]: I0307 01:12:43.269049 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7c5660a6-0a81-46dd-9996-b2503cb3884a-clustermesh-secrets\") pod \"cilium-84plb\" (UID: \"7c5660a6-0a81-46dd-9996-b2503cb3884a\") " pod="kube-system/cilium-84plb" Mar 7 01:12:43.269461 kubelet[3195]: I0307 01:12:43.269095 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7c5660a6-0a81-46dd-9996-b2503cb3884a-host-proc-sys-net\") pod \"cilium-84plb\" (UID: \"7c5660a6-0a81-46dd-9996-b2503cb3884a\") " pod="kube-system/cilium-84plb" Mar 7 01:12:43.269461 kubelet[3195]: I0307 01:12:43.269120 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7c5660a6-0a81-46dd-9996-b2503cb3884a-cilium-cgroup\") pod \"cilium-84plb\" (UID: \"7c5660a6-0a81-46dd-9996-b2503cb3884a\") " pod="kube-system/cilium-84plb" Mar 7 01:12:43.269461 kubelet[3195]: I0307 01:12:43.269144 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7c5660a6-0a81-46dd-9996-b2503cb3884a-cilium-config-path\") pod \"cilium-84plb\" (UID: \"7c5660a6-0a81-46dd-9996-b2503cb3884a\") " pod="kube-system/cilium-84plb" Mar 7 01:12:43.269461 kubelet[3195]: I0307 01:12:43.269168 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7c5660a6-0a81-46dd-9996-b2503cb3884a-cilium-ipsec-secrets\") pod \"cilium-84plb\" (UID: \"7c5660a6-0a81-46dd-9996-b2503cb3884a\") " pod="kube-system/cilium-84plb" Mar 7 01:12:43.269461 kubelet[3195]: I0307 01:12:43.269195 3195 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7c5660a6-0a81-46dd-9996-b2503cb3884a-hostproc\") pod \"cilium-84plb\" (UID: \"7c5660a6-0a81-46dd-9996-b2503cb3884a\") " pod="kube-system/cilium-84plb" Mar 7 01:12:43.269461 kubelet[3195]: I0307 01:12:43.269219 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c5660a6-0a81-46dd-9996-b2503cb3884a-xtables-lock\") pod \"cilium-84plb\" (UID: \"7c5660a6-0a81-46dd-9996-b2503cb3884a\") " pod="kube-system/cilium-84plb" Mar 7 01:12:43.269720 kubelet[3195]: I0307 01:12:43.269243 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7c5660a6-0a81-46dd-9996-b2503cb3884a-hubble-tls\") pod \"cilium-84plb\" (UID: \"7c5660a6-0a81-46dd-9996-b2503cb3884a\") " pod="kube-system/cilium-84plb" Mar 7 01:12:43.269720 kubelet[3195]: I0307 01:12:43.269277 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-992mh\" (UniqueName: \"kubernetes.io/projected/7c5660a6-0a81-46dd-9996-b2503cb3884a-kube-api-access-992mh\") pod \"cilium-84plb\" (UID: \"7c5660a6-0a81-46dd-9996-b2503cb3884a\") " pod="kube-system/cilium-84plb" Mar 7 01:12:43.269720 kubelet[3195]: I0307 01:12:43.269301 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7c5660a6-0a81-46dd-9996-b2503cb3884a-bpf-maps\") pod \"cilium-84plb\" (UID: \"7c5660a6-0a81-46dd-9996-b2503cb3884a\") " pod="kube-system/cilium-84plb" Mar 7 01:12:43.269720 kubelet[3195]: I0307 01:12:43.269330 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/7c5660a6-0a81-46dd-9996-b2503cb3884a-cni-path\") pod \"cilium-84plb\" (UID: \"7c5660a6-0a81-46dd-9996-b2503cb3884a\") " pod="kube-system/cilium-84plb" Mar 7 01:12:43.269720 kubelet[3195]: I0307 01:12:43.269352 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7c5660a6-0a81-46dd-9996-b2503cb3884a-etc-cni-netd\") pod \"cilium-84plb\" (UID: \"7c5660a6-0a81-46dd-9996-b2503cb3884a\") " pod="kube-system/cilium-84plb" Mar 7 01:12:43.269720 kubelet[3195]: I0307 01:12:43.269382 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7c5660a6-0a81-46dd-9996-b2503cb3884a-cilium-run\") pod \"cilium-84plb\" (UID: \"7c5660a6-0a81-46dd-9996-b2503cb3884a\") " pod="kube-system/cilium-84plb" Mar 7 01:12:43.270016 kubelet[3195]: I0307 01:12:43.269404 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c5660a6-0a81-46dd-9996-b2503cb3884a-lib-modules\") pod \"cilium-84plb\" (UID: \"7c5660a6-0a81-46dd-9996-b2503cb3884a\") " pod="kube-system/cilium-84plb" Mar 7 01:12:43.270016 kubelet[3195]: I0307 01:12:43.269431 3195 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7c5660a6-0a81-46dd-9996-b2503cb3884a-host-proc-sys-kernel\") pod \"cilium-84plb\" (UID: \"7c5660a6-0a81-46dd-9996-b2503cb3884a\") " pod="kube-system/cilium-84plb" Mar 7 01:12:43.270725 systemd-logind[1955]: Session 23 logged out. Waiting for processes to exit. Mar 7 01:12:43.273566 systemd-logind[1955]: Removed session 23. Mar 7 01:12:43.354732 systemd[1]: Started sshd@23-172.31.24.34:22-68.220.241.50:46114.service - OpenSSH per-connection server daemon (68.220.241.50:46114). 
Mar 7 01:12:43.548056 containerd[1977]: time="2026-03-07T01:12:43.547940175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-84plb,Uid:7c5660a6-0a81-46dd-9996-b2503cb3884a,Namespace:kube-system,Attempt:0,}" Mar 7 01:12:43.582212 containerd[1977]: time="2026-03-07T01:12:43.582105119Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:12:43.582212 containerd[1977]: time="2026-03-07T01:12:43.582173755Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:12:43.582212 containerd[1977]: time="2026-03-07T01:12:43.582201940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:12:43.583220 containerd[1977]: time="2026-03-07T01:12:43.582410210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:12:43.608068 systemd[1]: Started cri-containerd-948b3cf4d9a17a78dd3bd88ebb13386456f212f70a90557c6b2922bc2ef3a27e.scope - libcontainer container 948b3cf4d9a17a78dd3bd88ebb13386456f212f70a90557c6b2922bc2ef3a27e. 
Mar 7 01:12:43.638056 containerd[1977]: time="2026-03-07T01:12:43.638010371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-84plb,Uid:7c5660a6-0a81-46dd-9996-b2503cb3884a,Namespace:kube-system,Attempt:0,} returns sandbox id \"948b3cf4d9a17a78dd3bd88ebb13386456f212f70a90557c6b2922bc2ef3a27e\"" Mar 7 01:12:43.648183 containerd[1977]: time="2026-03-07T01:12:43.648056084Z" level=info msg="CreateContainer within sandbox \"948b3cf4d9a17a78dd3bd88ebb13386456f212f70a90557c6b2922bc2ef3a27e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 7 01:12:43.667017 containerd[1977]: time="2026-03-07T01:12:43.666966739Z" level=info msg="CreateContainer within sandbox \"948b3cf4d9a17a78dd3bd88ebb13386456f212f70a90557c6b2922bc2ef3a27e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a91202989f94e17a1b26685f60afa6a451a797a0ca7a1f4c1bf9990a1335e169\"" Mar 7 01:12:43.668016 containerd[1977]: time="2026-03-07T01:12:43.667972448Z" level=info msg="StartContainer for \"a91202989f94e17a1b26685f60afa6a451a797a0ca7a1f4c1bf9990a1335e169\"" Mar 7 01:12:43.702066 systemd[1]: Started cri-containerd-a91202989f94e17a1b26685f60afa6a451a797a0ca7a1f4c1bf9990a1335e169.scope - libcontainer container a91202989f94e17a1b26685f60afa6a451a797a0ca7a1f4c1bf9990a1335e169. Mar 7 01:12:43.731878 containerd[1977]: time="2026-03-07T01:12:43.731817973Z" level=info msg="StartContainer for \"a91202989f94e17a1b26685f60afa6a451a797a0ca7a1f4c1bf9990a1335e169\" returns successfully" Mar 7 01:12:43.847136 sshd[5133]: Accepted publickey for core from 68.220.241.50 port 46114 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:12:43.848438 sshd[5133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:12:43.853670 systemd-logind[1955]: New session 24 of user core. Mar 7 01:12:43.861079 systemd[1]: Started session-24.scope - Session 24 of User core. 
Mar 7 01:12:44.011839 systemd[1]: cri-containerd-a91202989f94e17a1b26685f60afa6a451a797a0ca7a1f4c1bf9990a1335e169.scope: Deactivated successfully. Mar 7 01:12:44.073189 containerd[1977]: time="2026-03-07T01:12:44.073125922Z" level=info msg="shim disconnected" id=a91202989f94e17a1b26685f60afa6a451a797a0ca7a1f4c1bf9990a1335e169 namespace=k8s.io Mar 7 01:12:44.073189 containerd[1977]: time="2026-03-07T01:12:44.073177978Z" level=warning msg="cleaning up after shim disconnected" id=a91202989f94e17a1b26685f60afa6a451a797a0ca7a1f4c1bf9990a1335e169 namespace=k8s.io Mar 7 01:12:44.073189 containerd[1977]: time="2026-03-07T01:12:44.073190069Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:12:44.200692 sshd[5133]: pam_unix(sshd:session): session closed for user core Mar 7 01:12:44.205114 systemd-logind[1955]: Session 24 logged out. Waiting for processes to exit. Mar 7 01:12:44.206037 systemd[1]: sshd@23-172.31.24.34:22-68.220.241.50:46114.service: Deactivated successfully. Mar 7 01:12:44.208617 systemd[1]: session-24.scope: Deactivated successfully. Mar 7 01:12:44.209818 systemd-logind[1955]: Removed session 24. Mar 7 01:12:44.292232 systemd[1]: Started sshd@24-172.31.24.34:22-68.220.241.50:46130.service - OpenSSH per-connection server daemon (68.220.241.50:46130). Mar 7 01:12:44.780912 sshd[5247]: Accepted publickey for core from 68.220.241.50 port 46130 ssh2: RSA SHA256:0PS0FBgqn6GWl/nQsMeHlwIixP16R4Q8OHmWUJZFPy8 Mar 7 01:12:44.781942 sshd[5247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:12:44.787704 systemd-logind[1955]: New session 25 of user core. Mar 7 01:12:44.793109 systemd[1]: Started session-25.scope - Session 25 of User core. 
Mar 7 01:12:44.810437 kubelet[3195]: E0307 01:12:44.810388 3195 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 7 01:12:45.037409 containerd[1977]: time="2026-03-07T01:12:45.035675100Z" level=info msg="CreateContainer within sandbox \"948b3cf4d9a17a78dd3bd88ebb13386456f212f70a90557c6b2922bc2ef3a27e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 7 01:12:45.065482 containerd[1977]: time="2026-03-07T01:12:45.065054298Z" level=info msg="CreateContainer within sandbox \"948b3cf4d9a17a78dd3bd88ebb13386456f212f70a90557c6b2922bc2ef3a27e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"49a5644c1d67644b94ba5ac3157f65c37e2f679a4d6bafd93ab67f1559aa3e96\""
Mar 7 01:12:45.066769 containerd[1977]: time="2026-03-07T01:12:45.066679450Z" level=info msg="StartContainer for \"49a5644c1d67644b94ba5ac3157f65c37e2f679a4d6bafd93ab67f1559aa3e96\""
Mar 7 01:12:45.122112 systemd[1]: Started cri-containerd-49a5644c1d67644b94ba5ac3157f65c37e2f679a4d6bafd93ab67f1559aa3e96.scope - libcontainer container 49a5644c1d67644b94ba5ac3157f65c37e2f679a4d6bafd93ab67f1559aa3e96.
Mar 7 01:12:45.156186 containerd[1977]: time="2026-03-07T01:12:45.155889471Z" level=info msg="StartContainer for \"49a5644c1d67644b94ba5ac3157f65c37e2f679a4d6bafd93ab67f1559aa3e96\" returns successfully"
Mar 7 01:12:45.364556 systemd[1]: cri-containerd-49a5644c1d67644b94ba5ac3157f65c37e2f679a4d6bafd93ab67f1559aa3e96.scope: Deactivated successfully.
Mar 7 01:12:45.387721 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49a5644c1d67644b94ba5ac3157f65c37e2f679a4d6bafd93ab67f1559aa3e96-rootfs.mount: Deactivated successfully.
Mar 7 01:12:45.415796 containerd[1977]: time="2026-03-07T01:12:45.415705185Z" level=info msg="shim disconnected" id=49a5644c1d67644b94ba5ac3157f65c37e2f679a4d6bafd93ab67f1559aa3e96 namespace=k8s.io
Mar 7 01:12:45.415796 containerd[1977]: time="2026-03-07T01:12:45.415791999Z" level=warning msg="cleaning up after shim disconnected" id=49a5644c1d67644b94ba5ac3157f65c37e2f679a4d6bafd93ab67f1559aa3e96 namespace=k8s.io
Mar 7 01:12:45.415796 containerd[1977]: time="2026-03-07T01:12:45.415804136Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:12:46.040536 containerd[1977]: time="2026-03-07T01:12:46.040373807Z" level=info msg="CreateContainer within sandbox \"948b3cf4d9a17a78dd3bd88ebb13386456f212f70a90557c6b2922bc2ef3a27e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 7 01:12:46.080604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount201911865.mount: Deactivated successfully.
Mar 7 01:12:46.093000 containerd[1977]: time="2026-03-07T01:12:46.092956492Z" level=info msg="CreateContainer within sandbox \"948b3cf4d9a17a78dd3bd88ebb13386456f212f70a90557c6b2922bc2ef3a27e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d36163b864b4a0d7e14ea235b553170233a63788f0eb6d4f33b067e69cee89f1\""
Mar 7 01:12:46.094391 containerd[1977]: time="2026-03-07T01:12:46.094321971Z" level=info msg="StartContainer for \"d36163b864b4a0d7e14ea235b553170233a63788f0eb6d4f33b067e69cee89f1\""
Mar 7 01:12:46.133094 systemd[1]: Started cri-containerd-d36163b864b4a0d7e14ea235b553170233a63788f0eb6d4f33b067e69cee89f1.scope - libcontainer container d36163b864b4a0d7e14ea235b553170233a63788f0eb6d4f33b067e69cee89f1.
Mar 7 01:12:46.178529 containerd[1977]: time="2026-03-07T01:12:46.178326301Z" level=info msg="StartContainer for \"d36163b864b4a0d7e14ea235b553170233a63788f0eb6d4f33b067e69cee89f1\" returns successfully"
Mar 7 01:12:46.316455 systemd[1]: cri-containerd-d36163b864b4a0d7e14ea235b553170233a63788f0eb6d4f33b067e69cee89f1.scope: Deactivated successfully.
Mar 7 01:12:46.361372 containerd[1977]: time="2026-03-07T01:12:46.361258458Z" level=info msg="shim disconnected" id=d36163b864b4a0d7e14ea235b553170233a63788f0eb6d4f33b067e69cee89f1 namespace=k8s.io
Mar 7 01:12:46.361605 containerd[1977]: time="2026-03-07T01:12:46.361377439Z" level=warning msg="cleaning up after shim disconnected" id=d36163b864b4a0d7e14ea235b553170233a63788f0eb6d4f33b067e69cee89f1 namespace=k8s.io
Mar 7 01:12:46.361605 containerd[1977]: time="2026-03-07T01:12:46.361391320Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:12:46.387552 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d36163b864b4a0d7e14ea235b553170233a63788f0eb6d4f33b067e69cee89f1-rootfs.mount: Deactivated successfully.
Mar 7 01:12:47.044907 containerd[1977]: time="2026-03-07T01:12:47.044845434Z" level=info msg="CreateContainer within sandbox \"948b3cf4d9a17a78dd3bd88ebb13386456f212f70a90557c6b2922bc2ef3a27e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 7 01:12:47.069231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2069014003.mount: Deactivated successfully.
Mar 7 01:12:47.075646 containerd[1977]: time="2026-03-07T01:12:47.075586975Z" level=info msg="CreateContainer within sandbox \"948b3cf4d9a17a78dd3bd88ebb13386456f212f70a90557c6b2922bc2ef3a27e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5471a60c4318123afb20af9b341f1f7c9f67da7bd1b60b4d86a16cac22add4f7\""
Mar 7 01:12:47.076394 containerd[1977]: time="2026-03-07T01:12:47.076355528Z" level=info msg="StartContainer for \"5471a60c4318123afb20af9b341f1f7c9f67da7bd1b60b4d86a16cac22add4f7\""
Mar 7 01:12:47.117058 systemd[1]: Started cri-containerd-5471a60c4318123afb20af9b341f1f7c9f67da7bd1b60b4d86a16cac22add4f7.scope - libcontainer container 5471a60c4318123afb20af9b341f1f7c9f67da7bd1b60b4d86a16cac22add4f7.
Mar 7 01:12:47.142929 systemd[1]: cri-containerd-5471a60c4318123afb20af9b341f1f7c9f67da7bd1b60b4d86a16cac22add4f7.scope: Deactivated successfully.
Mar 7 01:12:47.146315 containerd[1977]: time="2026-03-07T01:12:47.146260607Z" level=info msg="StartContainer for \"5471a60c4318123afb20af9b341f1f7c9f67da7bd1b60b4d86a16cac22add4f7\" returns successfully"
Mar 7 01:12:47.174988 containerd[1977]: time="2026-03-07T01:12:47.174926272Z" level=info msg="shim disconnected" id=5471a60c4318123afb20af9b341f1f7c9f67da7bd1b60b4d86a16cac22add4f7 namespace=k8s.io
Mar 7 01:12:47.174988 containerd[1977]: time="2026-03-07T01:12:47.174985978Z" level=warning msg="cleaning up after shim disconnected" id=5471a60c4318123afb20af9b341f1f7c9f67da7bd1b60b4d86a16cac22add4f7 namespace=k8s.io
Mar 7 01:12:47.174988 containerd[1977]: time="2026-03-07T01:12:47.174997504Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:12:47.387711 systemd[1]: run-containerd-runc-k8s.io-5471a60c4318123afb20af9b341f1f7c9f67da7bd1b60b4d86a16cac22add4f7-runc.5vzuRv.mount: Deactivated successfully.
Mar 7 01:12:47.387834 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5471a60c4318123afb20af9b341f1f7c9f67da7bd1b60b4d86a16cac22add4f7-rootfs.mount: Deactivated successfully.
Mar 7 01:12:48.051397 containerd[1977]: time="2026-03-07T01:12:48.051214685Z" level=info msg="CreateContainer within sandbox \"948b3cf4d9a17a78dd3bd88ebb13386456f212f70a90557c6b2922bc2ef3a27e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 7 01:12:48.080981 containerd[1977]: time="2026-03-07T01:12:48.080927801Z" level=info msg="CreateContainer within sandbox \"948b3cf4d9a17a78dd3bd88ebb13386456f212f70a90557c6b2922bc2ef3a27e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ddba24d82aaf4c600d2ef89a46f0dc9f5fd081c22d7068c548890156c27b0cdb\""
Mar 7 01:12:48.081685 containerd[1977]: time="2026-03-07T01:12:48.081648190Z" level=info msg="StartContainer for \"ddba24d82aaf4c600d2ef89a46f0dc9f5fd081c22d7068c548890156c27b0cdb\""
Mar 7 01:12:48.122089 systemd[1]: Started cri-containerd-ddba24d82aaf4c600d2ef89a46f0dc9f5fd081c22d7068c548890156c27b0cdb.scope - libcontainer container ddba24d82aaf4c600d2ef89a46f0dc9f5fd081c22d7068c548890156c27b0cdb.
Mar 7 01:12:48.157288 containerd[1977]: time="2026-03-07T01:12:48.157156197Z" level=info msg="StartContainer for \"ddba24d82aaf4c600d2ef89a46f0dc9f5fd081c22d7068c548890156c27b0cdb\" returns successfully"
Mar 7 01:12:50.614341 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Mar 7 01:12:52.206704 systemd[1]: run-containerd-runc-k8s.io-ddba24d82aaf4c600d2ef89a46f0dc9f5fd081c22d7068c548890156c27b0cdb-runc.KE2Sc0.mount: Deactivated successfully.
Mar 7 01:12:53.646671 systemd-networkd[1863]: lxc_health: Link UP
Mar 7 01:12:53.655230 (udev-worker)[5990]: Network interface NamePolicy= disabled on kernel command line.
Mar 7 01:12:53.696440 systemd-networkd[1863]: lxc_health: Gained carrier
Mar 7 01:12:54.761080 systemd-networkd[1863]: lxc_health: Gained IPv6LL
Mar 7 01:12:55.581443 kubelet[3195]: I0307 01:12:55.580469 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-84plb" podStartSLOduration=12.580447196 podStartE2EDuration="12.580447196s" podCreationTimestamp="2026-03-07 01:12:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:12:50.067988029 +0000 UTC m=+110.622459402" watchObservedRunningTime="2026-03-07 01:12:55.580447196 +0000 UTC m=+116.134918570"
Mar 7 01:12:56.830811 systemd[1]: run-containerd-runc-k8s.io-ddba24d82aaf4c600d2ef89a46f0dc9f5fd081c22d7068c548890156c27b0cdb-runc.mwFYGW.mount: Deactivated successfully.
Mar 7 01:12:57.604326 ntpd[1947]: Listen normally on 15 lxc_health [fe80::d01a:3bff:fe6e:148f%14]:123
Mar 7 01:12:57.605742 ntpd[1947]: 7 Mar 01:12:57 ntpd[1947]: Listen normally on 15 lxc_health [fe80::d01a:3bff:fe6e:148f%14]:123
Mar 7 01:12:59.163961 sshd[5247]: pam_unix(sshd:session): session closed for user core
Mar 7 01:12:59.168469 systemd-logind[1955]: Session 25 logged out. Waiting for processes to exit.
Mar 7 01:12:59.169497 systemd[1]: sshd@24-172.31.24.34:22-68.220.241.50:46130.service: Deactivated successfully.
Mar 7 01:12:59.172534 systemd[1]: session-25.scope: Deactivated successfully.
Mar 7 01:12:59.173900 systemd-logind[1955]: Removed session 25.
Mar 7 01:12:59.651251 containerd[1977]: time="2026-03-07T01:12:59.651210887Z" level=info msg="StopPodSandbox for \"4686fa7115e02de717001fd487abd88c943ae6fc2c01a0fccc225f7cd2c87f88\""
Mar 7 01:12:59.651907 containerd[1977]: time="2026-03-07T01:12:59.651553231Z" level=info msg="TearDown network for sandbox \"4686fa7115e02de717001fd487abd88c943ae6fc2c01a0fccc225f7cd2c87f88\" successfully"
Mar 7 01:12:59.651907 containerd[1977]: time="2026-03-07T01:12:59.651576559Z" level=info msg="StopPodSandbox for \"4686fa7115e02de717001fd487abd88c943ae6fc2c01a0fccc225f7cd2c87f88\" returns successfully"
Mar 7 01:12:59.654296 containerd[1977]: time="2026-03-07T01:12:59.652385105Z" level=info msg="RemovePodSandbox for \"4686fa7115e02de717001fd487abd88c943ae6fc2c01a0fccc225f7cd2c87f88\""
Mar 7 01:12:59.662855 containerd[1977]: time="2026-03-07T01:12:59.662802795Z" level=info msg="Forcibly stopping sandbox \"4686fa7115e02de717001fd487abd88c943ae6fc2c01a0fccc225f7cd2c87f88\""
Mar 7 01:12:59.663039 containerd[1977]: time="2026-03-07T01:12:59.662962579Z" level=info msg="TearDown network for sandbox \"4686fa7115e02de717001fd487abd88c943ae6fc2c01a0fccc225f7cd2c87f88\" successfully"
Mar 7 01:12:59.666900 containerd[1977]: time="2026-03-07T01:12:59.666840220Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4686fa7115e02de717001fd487abd88c943ae6fc2c01a0fccc225f7cd2c87f88\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 7 01:12:59.667026 containerd[1977]: time="2026-03-07T01:12:59.666929175Z" level=info msg="RemovePodSandbox \"4686fa7115e02de717001fd487abd88c943ae6fc2c01a0fccc225f7cd2c87f88\" returns successfully"
Mar 7 01:12:59.667488 containerd[1977]: time="2026-03-07T01:12:59.667459574Z" level=info msg="StopPodSandbox for \"6287a5df5e53ee12718798fa80918097a81b7c3d9dbf4d3c6c54038d9e81dd2a\""
Mar 7 01:12:59.667572 containerd[1977]: time="2026-03-07T01:12:59.667551070Z" level=info msg="TearDown network for sandbox \"6287a5df5e53ee12718798fa80918097a81b7c3d9dbf4d3c6c54038d9e81dd2a\" successfully"
Mar 7 01:12:59.667572 containerd[1977]: time="2026-03-07T01:12:59.667566363Z" level=info msg="StopPodSandbox for \"6287a5df5e53ee12718798fa80918097a81b7c3d9dbf4d3c6c54038d9e81dd2a\" returns successfully"
Mar 7 01:12:59.667994 containerd[1977]: time="2026-03-07T01:12:59.667963306Z" level=info msg="RemovePodSandbox for \"6287a5df5e53ee12718798fa80918097a81b7c3d9dbf4d3c6c54038d9e81dd2a\""
Mar 7 01:12:59.668077 containerd[1977]: time="2026-03-07T01:12:59.668012976Z" level=info msg="Forcibly stopping sandbox \"6287a5df5e53ee12718798fa80918097a81b7c3d9dbf4d3c6c54038d9e81dd2a\""
Mar 7 01:12:59.668134 containerd[1977]: time="2026-03-07T01:12:59.668079195Z" level=info msg="TearDown network for sandbox \"6287a5df5e53ee12718798fa80918097a81b7c3d9dbf4d3c6c54038d9e81dd2a\" successfully"
Mar 7 01:12:59.670970 containerd[1977]: time="2026-03-07T01:12:59.670928664Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6287a5df5e53ee12718798fa80918097a81b7c3d9dbf4d3c6c54038d9e81dd2a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 7 01:12:59.671052 containerd[1977]: time="2026-03-07T01:12:59.670977641Z" level=info msg="RemovePodSandbox \"6287a5df5e53ee12718798fa80918097a81b7c3d9dbf4d3c6c54038d9e81dd2a\" returns successfully"