Mar 14 00:22:33.996762 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 13 22:25:24 -00 2026
Mar 14 00:22:33.996799 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7
Mar 14 00:22:33.996818 kernel: BIOS-provided physical RAM map:
Mar 14 00:22:33.996828 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 14 00:22:33.996840 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Mar 14 00:22:33.996852 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Mar 14 00:22:33.996864 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Mar 14 00:22:33.996876 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Mar 14 00:22:33.996888 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Mar 14 00:22:33.996904 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Mar 14 00:22:33.996917 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Mar 14 00:22:33.996930 kernel: NX (Execute Disable) protection: active
Mar 14 00:22:33.996942 kernel: APIC: Static calls initialized
Mar 14 00:22:33.996955 kernel: efi: EFI v2.7 by EDK II
Mar 14 00:22:33.996972 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77015518
Mar 14 00:22:33.996989 kernel: SMBIOS 2.7 present.
Mar 14 00:22:33.997002 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Mar 14 00:22:33.997017 kernel: Hypervisor detected: KVM
Mar 14 00:22:33.997031 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 14 00:22:33.997045 kernel: kvm-clock: using sched offset of 3867712792 cycles
Mar 14 00:22:33.997059 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 14 00:22:33.997073 kernel: tsc: Detected 2499.998 MHz processor
Mar 14 00:22:33.997088 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 14 00:22:33.997102 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 14 00:22:33.997117 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Mar 14 00:22:33.997134 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 14 00:22:33.997149 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 14 00:22:33.997163 kernel: Using GB pages for direct mapping
Mar 14 00:22:33.997177 kernel: Secure boot disabled
Mar 14 00:22:33.997191 kernel: ACPI: Early table checksum verification disabled
Mar 14 00:22:33.997205 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Mar 14 00:22:33.997220 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Mar 14 00:22:33.997235 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Mar 14 00:22:33.997249 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Mar 14 00:22:33.997266 kernel: ACPI: FACS 0x00000000789D0000 000040
Mar 14 00:22:33.997294 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Mar 14 00:22:33.997308 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Mar 14 00:22:33.997321 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Mar 14 00:22:33.997335 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Mar 14 00:22:33.997349 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Mar 14 00:22:33.997370 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Mar 14 00:22:33.997386 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Mar 14 00:22:33.997399 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Mar 14 00:22:33.997413 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Mar 14 00:22:33.997427 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Mar 14 00:22:33.997442 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Mar 14 00:22:33.997455 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Mar 14 00:22:33.997469 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Mar 14 00:22:33.997487 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Mar 14 00:22:33.997501 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Mar 14 00:22:33.997514 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Mar 14 00:22:33.997528 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Mar 14 00:22:33.997542 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Mar 14 00:22:33.997556 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Mar 14 00:22:33.997571 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 14 00:22:33.997585 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Mar 14 00:22:33.997598 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Mar 14 00:22:33.997616 kernel: NUMA: Initialized distance table, cnt=1
Mar 14 00:22:33.997630 kernel: NODE_DATA(0) allocated [mem 0x7a8f0000-0x7a8f5fff]
Mar 14 00:22:33.997645 kernel: Zone ranges:
Mar 14 00:22:33.997661 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 14 00:22:33.997676 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Mar 14 00:22:33.997693 kernel: Normal empty
Mar 14 00:22:33.997709 kernel: Movable zone start for each node
Mar 14 00:22:33.997725 kernel: Early memory node ranges
Mar 14 00:22:33.997740 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 14 00:22:33.997760 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Mar 14 00:22:33.997776 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Mar 14 00:22:33.997792 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Mar 14 00:22:33.997808 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 14 00:22:33.997824 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 14 00:22:33.997840 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Mar 14 00:22:33.997855 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Mar 14 00:22:33.997871 kernel: ACPI: PM-Timer IO Port: 0xb008
Mar 14 00:22:33.997889 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 14 00:22:33.997905 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Mar 14 00:22:33.997924 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 14 00:22:33.997940 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 14 00:22:33.997955 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 14 00:22:33.997971 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 14 00:22:33.997987 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 14 00:22:33.998002 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 14 00:22:33.998017 kernel: TSC deadline timer available
Mar 14 00:22:33.998031 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 14 00:22:33.998045 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 14 00:22:33.998063 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Mar 14 00:22:33.998078 kernel: Booting paravirtualized kernel on KVM
Mar 14 00:22:33.998092 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 14 00:22:33.998107 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 14 00:22:33.998121 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Mar 14 00:22:33.998135 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Mar 14 00:22:33.998150 kernel: pcpu-alloc: [0] 0 1
Mar 14 00:22:33.998165 kernel: kvm-guest: PV spinlocks enabled
Mar 14 00:22:33.998180 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 14 00:22:33.998200 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7
Mar 14 00:22:33.998215 kernel: random: crng init done
Mar 14 00:22:33.998229 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 14 00:22:33.998244 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 14 00:22:33.998257 kernel: Fallback order for Node 0: 0
Mar 14 00:22:33.998273 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Mar 14 00:22:33.999397 kernel: Policy zone: DMA32
Mar 14 00:22:33.999416 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 14 00:22:33.999437 kernel: Memory: 1874628K/2037804K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 162916K reserved, 0K cma-reserved)
Mar 14 00:22:33.999452 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 14 00:22:33.999467 kernel: Kernel/User page tables isolation: enabled
Mar 14 00:22:33.999481 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 14 00:22:33.999496 kernel: ftrace: allocated 149 pages with 4 groups
Mar 14 00:22:33.999510 kernel: Dynamic Preempt: voluntary
Mar 14 00:22:33.999523 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 14 00:22:33.999539 kernel: rcu: RCU event tracing is enabled.
Mar 14 00:22:33.999554 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 14 00:22:33.999571 kernel: Trampoline variant of Tasks RCU enabled.
Mar 14 00:22:33.999587 kernel: Rude variant of Tasks RCU enabled.
Mar 14 00:22:33.999600 kernel: Tracing variant of Tasks RCU enabled.
Mar 14 00:22:33.999615 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 14 00:22:33.999630 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 14 00:22:33.999644 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 14 00:22:33.999659 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 14 00:22:33.999688 kernel: Console: colour dummy device 80x25
Mar 14 00:22:33.999704 kernel: printk: console [tty0] enabled
Mar 14 00:22:33.999719 kernel: printk: console [ttyS0] enabled
Mar 14 00:22:33.999734 kernel: ACPI: Core revision 20230628
Mar 14 00:22:33.999751 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Mar 14 00:22:33.999769 kernel: APIC: Switch to symmetric I/O mode setup
Mar 14 00:22:33.999784 kernel: x2apic enabled
Mar 14 00:22:33.999800 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 14 00:22:33.999816 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Mar 14 00:22:33.999832 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Mar 14 00:22:33.999850 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Mar 14 00:22:33.999866 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Mar 14 00:22:33.999882 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 14 00:22:33.999897 kernel: Spectre V2 : Mitigation: Retpolines
Mar 14 00:22:33.999912 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 14 00:22:33.999927 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Mar 14 00:22:33.999943 kernel: RETBleed: Vulnerable
Mar 14 00:22:33.999958 kernel: Speculative Store Bypass: Vulnerable
Mar 14 00:22:33.999973 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 14 00:22:33.999988 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 14 00:22:34.000007 kernel: GDS: Unknown: Dependent on hypervisor status
Mar 14 00:22:34.000022 kernel: active return thunk: its_return_thunk
Mar 14 00:22:34.000037 kernel: ITS: Mitigation: Aligned branch/return thunks
Mar 14 00:22:34.000053 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 14 00:22:34.000068 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 14 00:22:34.000083 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 14 00:22:34.000099 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Mar 14 00:22:34.000114 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Mar 14 00:22:34.000129 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Mar 14 00:22:34.000145 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Mar 14 00:22:34.000160 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Mar 14 00:22:34.000178 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Mar 14 00:22:34.000193 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 14 00:22:34.000208 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Mar 14 00:22:34.000224 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Mar 14 00:22:34.000239 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Mar 14 00:22:34.000253 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Mar 14 00:22:34.000267 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Mar 14 00:22:34.000305 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Mar 14 00:22:34.000331 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Mar 14 00:22:34.000346 kernel: Freeing SMP alternatives memory: 32K
Mar 14 00:22:34.000360 kernel: pid_max: default: 32768 minimum: 301
Mar 14 00:22:34.000374 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 14 00:22:34.000393 kernel: landlock: Up and running.
Mar 14 00:22:34.000408 kernel: SELinux: Initializing.
Mar 14 00:22:34.000423 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 14 00:22:34.000438 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 14 00:22:34.000466 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8175M CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x4)
Mar 14 00:22:34.000481 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 14 00:22:34.000495 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 14 00:22:34.000508 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 14 00:22:34.000524 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Mar 14 00:22:34.000540 kernel: signal: max sigframe size: 3632
Mar 14 00:22:34.000562 kernel: rcu: Hierarchical SRCU implementation.
Mar 14 00:22:34.000576 kernel: rcu: Max phase no-delay instances is 400.
Mar 14 00:22:34.000590 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 14 00:22:34.000604 kernel: smp: Bringing up secondary CPUs ...
Mar 14 00:22:34.000618 kernel: smpboot: x86: Booting SMP configuration:
Mar 14 00:22:34.000634 kernel: .... node #0, CPUs: #1
Mar 14 00:22:34.000651 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Mar 14 00:22:34.000669 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Mar 14 00:22:34.000689 kernel: smp: Brought up 1 node, 2 CPUs
Mar 14 00:22:34.000706 kernel: smpboot: Max logical packages: 1
Mar 14 00:22:34.000723 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Mar 14 00:22:34.000740 kernel: devtmpfs: initialized
Mar 14 00:22:34.000756 kernel: x86/mm: Memory block size: 128MB
Mar 14 00:22:34.000771 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Mar 14 00:22:34.000788 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 14 00:22:34.000805 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 14 00:22:34.000820 kernel: pinctrl core: initialized pinctrl subsystem
Mar 14 00:22:34.000839 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 14 00:22:34.000855 kernel: audit: initializing netlink subsys (disabled)
Mar 14 00:22:34.000871 kernel: audit: type=2000 audit(1773447753.063:1): state=initialized audit_enabled=0 res=1
Mar 14 00:22:34.000886 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 14 00:22:34.000902 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 14 00:22:34.000917 kernel: cpuidle: using governor menu
Mar 14 00:22:34.000932 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 14 00:22:34.000949 kernel: dca service started, version 1.12.1
Mar 14 00:22:34.000964 kernel: PCI: Using configuration type 1 for base access
Mar 14 00:22:34.000983 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 14 00:22:34.000999 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 14 00:22:34.001014 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 14 00:22:34.001030 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 14 00:22:34.001046 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 14 00:22:34.001061 kernel: ACPI: Added _OSI(Module Device)
Mar 14 00:22:34.001077 kernel: ACPI: Added _OSI(Processor Device)
Mar 14 00:22:34.001092 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 14 00:22:34.001108 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Mar 14 00:22:34.001127 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 14 00:22:34.001142 kernel: ACPI: Interpreter enabled
Mar 14 00:22:34.001158 kernel: ACPI: PM: (supports S0 S5)
Mar 14 00:22:34.001173 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 14 00:22:34.001188 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 14 00:22:34.001204 kernel: PCI: Using E820 reservations for host bridge windows
Mar 14 00:22:34.001219 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Mar 14 00:22:34.001235 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 14 00:22:34.002692 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Mar 14 00:22:34.010176 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Mar 14 00:22:34.010362 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Mar 14 00:22:34.010386 kernel: acpiphp: Slot [3] registered
Mar 14 00:22:34.010403 kernel: acpiphp: Slot [4] registered
Mar 14 00:22:34.010419 kernel: acpiphp: Slot [5] registered
Mar 14 00:22:34.010433 kernel: acpiphp: Slot [6] registered
Mar 14 00:22:34.010447 kernel: acpiphp: Slot [7] registered
Mar 14 00:22:34.010463 kernel: acpiphp: Slot [8] registered
Mar 14 00:22:34.010482 kernel: acpiphp: Slot [9] registered
Mar 14 00:22:34.010497 kernel: acpiphp: Slot [10] registered
Mar 14 00:22:34.010512 kernel: acpiphp: Slot [11] registered
Mar 14 00:22:34.010527 kernel: acpiphp: Slot [12] registered
Mar 14 00:22:34.010542 kernel: acpiphp: Slot [13] registered
Mar 14 00:22:34.010556 kernel: acpiphp: Slot [14] registered
Mar 14 00:22:34.010571 kernel: acpiphp: Slot [15] registered
Mar 14 00:22:34.010585 kernel: acpiphp: Slot [16] registered
Mar 14 00:22:34.010599 kernel: acpiphp: Slot [17] registered
Mar 14 00:22:34.010616 kernel: acpiphp: Slot [18] registered
Mar 14 00:22:34.010630 kernel: acpiphp: Slot [19] registered
Mar 14 00:22:34.010645 kernel: acpiphp: Slot [20] registered
Mar 14 00:22:34.010660 kernel: acpiphp: Slot [21] registered
Mar 14 00:22:34.010675 kernel: acpiphp: Slot [22] registered
Mar 14 00:22:34.010690 kernel: acpiphp: Slot [23] registered
Mar 14 00:22:34.010705 kernel: acpiphp: Slot [24] registered
Mar 14 00:22:34.010720 kernel: acpiphp: Slot [25] registered
Mar 14 00:22:34.010734 kernel: acpiphp: Slot [26] registered
Mar 14 00:22:34.010749 kernel: acpiphp: Slot [27] registered
Mar 14 00:22:34.010767 kernel: acpiphp: Slot [28] registered
Mar 14 00:22:34.010782 kernel: acpiphp: Slot [29] registered
Mar 14 00:22:34.010797 kernel: acpiphp: Slot [30] registered
Mar 14 00:22:34.010812 kernel: acpiphp: Slot [31] registered
Mar 14 00:22:34.010827 kernel: PCI host bridge to bus 0000:00
Mar 14 00:22:34.010981 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 14 00:22:34.011105 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 14 00:22:34.011228 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 14 00:22:34.012039 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Mar 14 00:22:34.012194 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Mar 14 00:22:34.012346 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 14 00:22:34.012513 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Mar 14 00:22:34.012754 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Mar 14 00:22:34.012992 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Mar 14 00:22:34.013145 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Mar 14 00:22:34.013321 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Mar 14 00:22:34.013478 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Mar 14 00:22:34.013623 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Mar 14 00:22:34.013773 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Mar 14 00:22:34.013921 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Mar 14 00:22:34.014057 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Mar 14 00:22:34.014207 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Mar 14 00:22:34.014715 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Mar 14 00:22:34.014884 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Mar 14 00:22:34.015025 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Mar 14 00:22:34.015169 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 14 00:22:34.015634 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Mar 14 00:22:34.015785 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Mar 14 00:22:34.015937 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Mar 14 00:22:34.016078 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Mar 14 00:22:34.016100 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 14 00:22:34.016117 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 14 00:22:34.016133 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 14 00:22:34.016149 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 14 00:22:34.016166 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Mar 14 00:22:34.016186 kernel: iommu: Default domain type: Translated
Mar 14 00:22:34.016203 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 14 00:22:34.016220 kernel: efivars: Registered efivars operations
Mar 14 00:22:34.016236 kernel: PCI: Using ACPI for IRQ routing
Mar 14 00:22:34.016252 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 14 00:22:34.016268 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Mar 14 00:22:34.016514 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Mar 14 00:22:34.016664 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Mar 14 00:22:34.016805 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Mar 14 00:22:34.016947 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 14 00:22:34.016968 kernel: vgaarb: loaded
Mar 14 00:22:34.016984 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Mar 14 00:22:34.017001 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Mar 14 00:22:34.017018 kernel: clocksource: Switched to clocksource kvm-clock
Mar 14 00:22:34.017035 kernel: VFS: Disk quotas dquot_6.6.0
Mar 14 00:22:34.017052 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 14 00:22:34.017070 kernel: pnp: PnP ACPI init
Mar 14 00:22:34.017086 kernel: pnp: PnP ACPI: found 5 devices
Mar 14 00:22:34.017107 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 14 00:22:34.017124 kernel: NET: Registered PF_INET protocol family
Mar 14 00:22:34.017142 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 14 00:22:34.017159 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Mar 14 00:22:34.017176 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 14 00:22:34.017193 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 14 00:22:34.017210 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Mar 14 00:22:34.017227 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Mar 14 00:22:34.017245 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 14 00:22:34.017265 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 14 00:22:34.017306 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 14 00:22:34.017324 kernel: NET: Registered PF_XDP protocol family
Mar 14 00:22:34.017458 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 14 00:22:34.017583 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 14 00:22:34.017715 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 14 00:22:34.017836 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Mar 14 00:22:34.017957 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Mar 14 00:22:34.018104 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Mar 14 00:22:34.018126 kernel: PCI: CLS 0 bytes, default 64
Mar 14 00:22:34.018143 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 14 00:22:34.018161 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Mar 14 00:22:34.018178 kernel: clocksource: Switched to clocksource tsc
Mar 14 00:22:34.018195 kernel: Initialise system trusted keyrings
Mar 14 00:22:34.018212 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Mar 14 00:22:34.018229 kernel: Key type asymmetric registered
Mar 14 00:22:34.018249 kernel: Asymmetric key parser 'x509' registered
Mar 14 00:22:34.018266 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 14 00:22:34.019097 kernel: io scheduler mq-deadline registered
Mar 14 00:22:34.019119 kernel: io scheduler kyber registered
Mar 14 00:22:34.019135 kernel: io scheduler bfq registered
Mar 14 00:22:34.019152 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 14 00:22:34.019169 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 14 00:22:34.019186 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 14 00:22:34.019202 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 14 00:22:34.019224 kernel: i8042: Warning: Keylock active
Mar 14 00:22:34.019241 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 14 00:22:34.019257 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 14 00:22:34.019457 kernel: rtc_cmos 00:00: RTC can wake from S4
Mar 14 00:22:34.019591 kernel: rtc_cmos 00:00: registered as rtc0
Mar 14 00:22:34.019719 kernel: rtc_cmos 00:00: setting system clock to 2026-03-14T00:22:33 UTC (1773447753)
Mar 14 00:22:34.019846 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Mar 14 00:22:34.019866 kernel: intel_pstate: CPU model not supported
Mar 14 00:22:34.019888 kernel: efifb: probing for efifb
Mar 14 00:22:34.019905 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Mar 14 00:22:34.019922 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Mar 14 00:22:34.019938 kernel: efifb: scrolling: redraw
Mar 14 00:22:34.019955 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 14 00:22:34.019970 kernel: Console: switching to colour frame buffer device 100x37
Mar 14 00:22:34.019986 kernel: fb0: EFI VGA frame buffer device
Mar 14 00:22:34.020001 kernel: pstore: Using crash dump compression: deflate
Mar 14 00:22:34.020018 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 14 00:22:34.020039 kernel: NET: Registered PF_INET6 protocol family
Mar 14 00:22:34.020055 kernel: Segment Routing with IPv6
Mar 14 00:22:34.020072 kernel: In-situ OAM (IOAM) with IPv6
Mar 14 00:22:34.020089 kernel: NET: Registered PF_PACKET protocol family
Mar 14 00:22:34.020106 kernel: Key type dns_resolver registered
Mar 14 00:22:34.020123 kernel: IPI shorthand broadcast: enabled
Mar 14 00:22:34.020169 kernel: sched_clock: Marking stable (540008436, 176297057)->(822584263, -106278770)
Mar 14 00:22:34.020189 kernel: registered taskstats version 1
Mar 14 00:22:34.020206 kernel: Loading compiled-in X.509 certificates
Mar 14 00:22:34.020223 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: a10808ddb7a43f470807cfbbb5be2c08229c2dec'
Mar 14 00:22:34.020244 kernel: Key type .fscrypt registered
Mar 14 00:22:34.020261 kernel: Key type fscrypt-provisioning registered
Mar 14 00:22:34.020354 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 14 00:22:34.020372 kernel: ima: Allocated hash algorithm: sha1
Mar 14 00:22:34.020390 kernel: ima: No architecture policies found
Mar 14 00:22:34.020406 kernel: clk: Disabling unused clocks
Mar 14 00:22:34.020424 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 14 00:22:34.020441 kernel: Write protecting the kernel read-only data: 36864k
Mar 14 00:22:34.020459 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 14 00:22:34.020481 kernel: Run /init as init process
Mar 14 00:22:34.020499 kernel: with arguments:
Mar 14 00:22:34.020516 kernel: /init
Mar 14 00:22:34.020533 kernel: with environment:
Mar 14 00:22:34.020546 kernel: HOME=/
Mar 14 00:22:34.020559 kernel: TERM=linux
Mar 14 00:22:34.020577 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 14 00:22:34.020597 systemd[1]: Detected virtualization amazon.
Mar 14 00:22:34.020617 systemd[1]: Detected architecture x86-64. Mar 14 00:22:34.020633 systemd[1]: Running in initrd. Mar 14 00:22:34.020650 systemd[1]: No hostname configured, using default hostname. Mar 14 00:22:34.020668 systemd[1]: Hostname set to . Mar 14 00:22:34.020687 systemd[1]: Initializing machine ID from VM UUID. Mar 14 00:22:34.020705 systemd[1]: Queued start job for default target initrd.target. Mar 14 00:22:34.020723 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 14 00:22:34.020741 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 14 00:22:34.020763 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 14 00:22:34.020782 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 14 00:22:34.020801 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 14 00:22:34.020822 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 14 00:22:34.020846 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 14 00:22:34.020865 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 14 00:22:34.020884 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 14 00:22:34.020903 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 14 00:22:34.020921 systemd[1]: Reached target paths.target - Path Units. Mar 14 00:22:34.020940 systemd[1]: Reached target slices.target - Slice Units. Mar 14 00:22:34.020958 systemd[1]: Reached target swap.target - Swaps. Mar 14 00:22:34.020980 systemd[1]: Reached target timers.target - Timer Units. 
Mar 14 00:22:34.021001 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 14 00:22:34.021020 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 14 00:22:34.021039 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 14 00:22:34.021057 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 14 00:22:34.021075 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 14 00:22:34.021093 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 14 00:22:34.021112 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 14 00:22:34.021130 systemd[1]: Reached target sockets.target - Socket Units. Mar 14 00:22:34.021152 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 14 00:22:34.021170 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 14 00:22:34.021189 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 14 00:22:34.021207 systemd[1]: Starting systemd-fsck-usr.service... Mar 14 00:22:34.021226 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 14 00:22:34.021245 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 14 00:22:34.021263 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 14 00:22:34.021296 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 14 00:22:34.021315 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 14 00:22:34.021366 systemd-journald[179]: Collecting audit messages is disabled. Mar 14 00:22:34.021408 systemd[1]: Finished systemd-fsck-usr.service. 
Mar 14 00:22:34.021431 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 14 00:22:34.021449 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 14 00:22:34.021468 systemd-journald[179]: Journal started Mar 14 00:22:34.021506 systemd-journald[179]: Runtime Journal (/run/log/journal/ec2b8fc578d56b93b6c065a523fec2d2) is 4.7M, max 38.2M, 33.4M free. Mar 14 00:22:33.996726 systemd-modules-load[180]: Inserted module 'overlay' Mar 14 00:22:34.031356 systemd[1]: Started systemd-journald.service - Journal Service. Mar 14 00:22:34.036738 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:22:34.038512 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 14 00:22:34.047568 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 14 00:22:34.052586 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 14 00:22:34.057488 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 14 00:22:34.063515 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 14 00:22:34.070346 kernel: Bridge firewalling registered Mar 14 00:22:34.071381 systemd-modules-load[180]: Inserted module 'br_netfilter' Mar 14 00:22:34.073470 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 14 00:22:34.084559 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 14 00:22:34.085525 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 14 00:22:34.096885 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Mar 14 00:22:34.099455 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 14 00:22:34.110595 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 14 00:22:34.111746 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 14 00:22:34.115493 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 14 00:22:34.128790 dracut-cmdline[211]: dracut-dracut-053 Mar 14 00:22:34.133488 dracut-cmdline[211]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7 Mar 14 00:22:34.167277 systemd-resolved[214]: Positive Trust Anchors: Mar 14 00:22:34.168341 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 14 00:22:34.168409 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 14 00:22:34.174628 systemd-resolved[214]: Defaulting to hostname 'linux'. Mar 14 00:22:34.178146 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Mar 14 00:22:34.179549 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 14 00:22:34.224378 kernel: SCSI subsystem initialized Mar 14 00:22:34.234314 kernel: Loading iSCSI transport class v2.0-870. Mar 14 00:22:34.246311 kernel: iscsi: registered transport (tcp) Mar 14 00:22:34.268797 kernel: iscsi: registered transport (qla4xxx) Mar 14 00:22:34.268882 kernel: QLogic iSCSI HBA Driver Mar 14 00:22:34.309584 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 14 00:22:34.321583 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 14 00:22:34.350128 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 14 00:22:34.350206 kernel: device-mapper: uevent: version 1.0.3 Mar 14 00:22:34.350229 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 14 00:22:34.395323 kernel: raid6: avx512x4 gen() 16700 MB/s Mar 14 00:22:34.413323 kernel: raid6: avx512x2 gen() 17838 MB/s Mar 14 00:22:34.431320 kernel: raid6: avx512x1 gen() 17758 MB/s Mar 14 00:22:34.449317 kernel: raid6: avx2x4 gen() 17824 MB/s Mar 14 00:22:34.467318 kernel: raid6: avx2x2 gen() 17790 MB/s Mar 14 00:22:34.486526 kernel: raid6: avx2x1 gen() 13520 MB/s Mar 14 00:22:34.486605 kernel: raid6: using algorithm avx512x2 gen() 17838 MB/s Mar 14 00:22:34.506576 kernel: raid6: .... xor() 24459 MB/s, rmw enabled Mar 14 00:22:34.506649 kernel: raid6: using avx512x2 recovery algorithm Mar 14 00:22:34.530325 kernel: xor: automatically using best checksumming function avx Mar 14 00:22:34.694340 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 14 00:22:34.705683 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 14 00:22:34.711507 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 14 00:22:34.732991 systemd-udevd[397]: Using default interface naming scheme 'v255'. 
Mar 14 00:22:34.738213 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 14 00:22:34.748656 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 14 00:22:34.766811 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Mar 14 00:22:34.798722 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 14 00:22:34.802532 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 14 00:22:34.856618 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 14 00:22:34.865554 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 14 00:22:34.892963 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 14 00:22:34.896241 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 14 00:22:34.896873 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 14 00:22:34.898388 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 14 00:22:34.907563 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 14 00:22:34.940075 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 14 00:22:34.969317 kernel: cryptd: max_cpu_qlen set to 1000 Mar 14 00:22:34.982310 kernel: ena 0000:00:05.0: ENA device version: 0.10 Mar 14 00:22:34.982613 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Mar 14 00:22:34.984559 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 14 00:22:34.984822 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 14 00:22:34.987660 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 14 00:22:34.988225 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Mar 14 00:22:34.988572 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:22:34.989174 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 14 00:22:34.999933 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 14 00:22:35.004577 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 14 00:22:35.004723 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:22:35.013976 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Mar 14 00:22:35.021307 kernel: AVX2 version of gcm_enc/dec engaged. Mar 14 00:22:35.020899 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 14 00:22:35.035861 kernel: AES CTR mode by8 optimization enabled Mar 14 00:22:35.035899 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:ee:38:1b:68:31 Mar 14 00:22:35.027074 (udev-worker)[455]: Network interface NamePolicy= disabled on kernel command line. Mar 14 00:22:35.056770 kernel: nvme nvme0: pci function 0000:00:04.0 Mar 14 00:22:35.057033 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Mar 14 00:22:35.065428 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:22:35.075421 kernel: nvme nvme0: 2/0/0 default/read/poll queues Mar 14 00:22:35.080537 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 14 00:22:35.093688 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 14 00:22:35.093727 kernel: GPT:9289727 != 33554431 Mar 14 00:22:35.093749 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 14 00:22:35.093770 kernel: GPT:9289727 != 33554431 Mar 14 00:22:35.093789 kernel: GPT: Use GNU Parted to correct GPT errors. 
Mar 14 00:22:35.093809 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 14 00:22:35.110509 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 14 00:22:35.196313 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (456) Mar 14 00:22:35.205348 kernel: BTRFS: device fsid cd4a88d6-c21b-44c8-aac6-68c13cee1def devid 1 transid 34 /dev/nvme0n1p3 scanned by (udev-worker) (455) Mar 14 00:22:35.252378 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Mar 14 00:22:35.289423 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Mar 14 00:22:35.302240 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Mar 14 00:22:35.308318 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Mar 14 00:22:35.308972 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Mar 14 00:22:35.316900 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 14 00:22:35.324006 disk-uuid[629]: Primary Header is updated. Mar 14 00:22:35.324006 disk-uuid[629]: Secondary Entries is updated. Mar 14 00:22:35.324006 disk-uuid[629]: Secondary Header is updated. Mar 14 00:22:35.332399 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 14 00:22:35.340428 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 14 00:22:35.349329 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 14 00:22:36.348418 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 14 00:22:36.349238 disk-uuid[630]: The operation has completed successfully. Mar 14 00:22:36.495629 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 14 00:22:36.495761 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Mar 14 00:22:36.517573 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 14 00:22:36.523050 sh[973]: Success Mar 14 00:22:36.546319 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Mar 14 00:22:36.637620 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 14 00:22:36.644732 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 14 00:22:36.651092 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 14 00:22:36.683644 kernel: BTRFS info (device dm-0): first mount of filesystem cd4a88d6-c21b-44c8-aac6-68c13cee1def Mar 14 00:22:36.685355 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 14 00:22:36.685380 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 14 00:22:36.689735 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 14 00:22:36.689798 kernel: BTRFS info (device dm-0): using free space tree Mar 14 00:22:36.766332 kernel: BTRFS info (device dm-0): enabling ssd optimizations Mar 14 00:22:36.776213 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 14 00:22:36.777596 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 14 00:22:36.782478 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 14 00:22:36.785487 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Mar 14 00:22:36.812729 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a Mar 14 00:22:36.815042 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Mar 14 00:22:36.815069 kernel: BTRFS info (device nvme0n1p6): using free space tree Mar 14 00:22:36.824412 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Mar 14 00:22:36.842314 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a Mar 14 00:22:36.842750 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 14 00:22:36.852380 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 14 00:22:36.857486 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 14 00:22:36.917448 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 14 00:22:36.930730 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 14 00:22:36.955561 systemd-networkd[1165]: lo: Link UP Mar 14 00:22:36.955578 systemd-networkd[1165]: lo: Gained carrier Mar 14 00:22:36.957464 systemd-networkd[1165]: Enumeration completed Mar 14 00:22:36.957951 systemd-networkd[1165]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:22:36.957957 systemd-networkd[1165]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 14 00:22:36.959217 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 14 00:22:36.960700 systemd[1]: Reached target network.target - Network. Mar 14 00:22:36.962082 systemd-networkd[1165]: eth0: Link UP Mar 14 00:22:36.962088 systemd-networkd[1165]: eth0: Gained carrier Mar 14 00:22:36.962103 systemd-networkd[1165]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Mar 14 00:22:37.088423 ignition[1098]: Ignition 2.19.0 Mar 14 00:22:37.088435 ignition[1098]: Stage: fetch-offline Mar 14 00:22:37.088636 ignition[1098]: no configs at "/usr/lib/ignition/base.d" Mar 14 00:22:37.088644 ignition[1098]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 14 00:22:37.089846 ignition[1098]: Ignition finished successfully Mar 14 00:22:37.091520 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 14 00:22:37.103588 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Mar 14 00:22:37.120451 ignition[1173]: Ignition 2.19.0 Mar 14 00:22:37.120471 ignition[1173]: Stage: fetch Mar 14 00:22:37.120946 ignition[1173]: no configs at "/usr/lib/ignition/base.d" Mar 14 00:22:37.120960 ignition[1173]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 14 00:22:37.121084 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 14 00:22:37.121273 ignition[1173]: PUT error: Put "http://169.254.169.254/latest/api/token": dial tcp 169.254.169.254:80: connect: network is unreachable Mar 14 00:22:37.321495 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #2 Mar 14 00:22:37.321711 ignition[1173]: PUT error: Put "http://169.254.169.254/latest/api/token": dial tcp 169.254.169.254:80: connect: network is unreachable Mar 14 00:22:37.722836 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #3 Mar 14 00:22:37.723013 ignition[1173]: PUT error: Put "http://169.254.169.254/latest/api/token": dial tcp 169.254.169.254:80: connect: network is unreachable Mar 14 00:22:38.234507 systemd-networkd[1165]: eth0: Gained IPv6LL Mar 14 00:22:38.523446 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #4 Mar 14 00:22:38.523651 ignition[1173]: PUT error: Put "http://169.254.169.254/latest/api/token": dial tcp 169.254.169.254:80: connect: network is unreachable Mar 14 00:22:40.125128 ignition[1173]: PUT http://169.254.169.254/latest/api/token: 
attempt #5 Mar 14 00:22:40.125312 ignition[1173]: PUT error: Put "http://169.254.169.254/latest/api/token": dial tcp 169.254.169.254:80: connect: network is unreachable Mar 14 00:22:43.328393 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #6 Mar 14 00:22:43.328554 ignition[1173]: PUT error: Put "http://169.254.169.254/latest/api/token": dial tcp 169.254.169.254:80: connect: network is unreachable Mar 14 00:22:45.745417 systemd-networkd[1165]: eth0: DHCPv4 address 172.31.30.82/20, gateway 172.31.16.1 acquired from 172.31.16.1 Mar 14 00:22:48.329465 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #7 Mar 14 00:22:48.361887 ignition[1173]: PUT result: OK Mar 14 00:22:48.365855 ignition[1173]: parsed url from cmdline: "" Mar 14 00:22:48.365869 ignition[1173]: no config URL provided Mar 14 00:22:48.365879 ignition[1173]: reading system config file "/usr/lib/ignition/user.ign" Mar 14 00:22:48.365896 ignition[1173]: no config at "/usr/lib/ignition/user.ign" Mar 14 00:22:48.365919 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 14 00:22:48.366893 ignition[1173]: PUT result: OK Mar 14 00:22:48.366958 ignition[1173]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Mar 14 00:22:48.368963 ignition[1173]: GET result: OK Mar 14 00:22:48.369069 ignition[1173]: parsing config with SHA512: 651dcd2393642b84974f10267658170b6f8206cce4c5ca89153f0a848d70112f32c16436362581781c3cad97c3ebc4938c324a9fcf326083d4af14abfab23463 Mar 14 00:22:48.375471 unknown[1173]: fetched base config from "system" Mar 14 00:22:48.376563 ignition[1173]: fetch: fetch complete Mar 14 00:22:48.375492 unknown[1173]: fetched base config from "system" Mar 14 00:22:48.376578 ignition[1173]: fetch: fetch passed Mar 14 00:22:48.375500 unknown[1173]: fetched user config from "aws" Mar 14 00:22:48.376650 ignition[1173]: Ignition finished successfully Mar 14 00:22:48.379649 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). 
Mar 14 00:22:48.385540 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 14 00:22:48.403274 ignition[1180]: Ignition 2.19.0 Mar 14 00:22:48.403306 ignition[1180]: Stage: kargs Mar 14 00:22:48.403765 ignition[1180]: no configs at "/usr/lib/ignition/base.d" Mar 14 00:22:48.403778 ignition[1180]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 14 00:22:48.403893 ignition[1180]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 14 00:22:48.404921 ignition[1180]: PUT result: OK Mar 14 00:22:48.407517 ignition[1180]: kargs: kargs passed Mar 14 00:22:48.407608 ignition[1180]: Ignition finished successfully Mar 14 00:22:48.409474 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 14 00:22:48.415495 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 14 00:22:48.431206 ignition[1187]: Ignition 2.19.0 Mar 14 00:22:48.431225 ignition[1187]: Stage: disks Mar 14 00:22:48.431692 ignition[1187]: no configs at "/usr/lib/ignition/base.d" Mar 14 00:22:48.431706 ignition[1187]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 14 00:22:48.431834 ignition[1187]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 14 00:22:48.432845 ignition[1187]: PUT result: OK Mar 14 00:22:48.435450 ignition[1187]: disks: disks passed Mar 14 00:22:48.435540 ignition[1187]: Ignition finished successfully Mar 14 00:22:48.437074 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 14 00:22:48.438127 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 14 00:22:48.438560 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 14 00:22:48.439132 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 14 00:22:48.439752 systemd[1]: Reached target sysinit.target - System Initialization. Mar 14 00:22:48.440438 systemd[1]: Reached target basic.target - Basic System. 
Mar 14 00:22:48.445484 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 14 00:22:48.470303 systemd-fsck[1195]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 14 00:22:48.475160 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 14 00:22:48.481430 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 14 00:22:48.591308 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 08e1a4ba-bbe3-4d29-aaf8-5eb22e9a9bf3 r/w with ordered data mode. Quota mode: none. Mar 14 00:22:48.591975 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 14 00:22:48.593393 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 14 00:22:48.605509 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 14 00:22:48.609155 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 14 00:22:48.610403 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 14 00:22:48.610471 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 14 00:22:48.610506 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 14 00:22:48.629313 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1215) Mar 14 00:22:48.630243 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 14 00:22:48.640989 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a Mar 14 00:22:48.641029 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Mar 14 00:22:48.641051 kernel: BTRFS info (device nvme0n1p6): using free space tree Mar 14 00:22:48.642619 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Mar 14 00:22:48.645401 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Mar 14 00:22:48.649110 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 14 00:22:48.858469 initrd-setup-root[1239]: cut: /sysroot/etc/passwd: No such file or directory Mar 14 00:22:48.865870 initrd-setup-root[1246]: cut: /sysroot/etc/group: No such file or directory Mar 14 00:22:48.872157 initrd-setup-root[1253]: cut: /sysroot/etc/shadow: No such file or directory Mar 14 00:22:48.878043 initrd-setup-root[1260]: cut: /sysroot/etc/gshadow: No such file or directory Mar 14 00:22:49.098330 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 14 00:22:49.108452 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 14 00:22:49.114682 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 14 00:22:49.119920 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 14 00:22:49.124735 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a Mar 14 00:22:49.165927 ignition[1327]: INFO : Ignition 2.19.0 Mar 14 00:22:49.165927 ignition[1327]: INFO : Stage: mount Mar 14 00:22:49.167516 ignition[1327]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 14 00:22:49.167516 ignition[1327]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 14 00:22:49.167516 ignition[1327]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 14 00:22:49.167571 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 14 00:22:49.170510 ignition[1327]: INFO : PUT result: OK Mar 14 00:22:49.171811 ignition[1327]: INFO : mount: mount passed Mar 14 00:22:49.172462 ignition[1327]: INFO : Ignition finished successfully Mar 14 00:22:49.173458 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 14 00:22:49.179432 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 14 00:22:49.597554 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Mar 14 00:22:49.617313 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1339) Mar 14 00:22:49.617391 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a Mar 14 00:22:49.620574 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Mar 14 00:22:49.622495 kernel: BTRFS info (device nvme0n1p6): using free space tree Mar 14 00:22:49.630315 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Mar 14 00:22:49.632714 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 14 00:22:49.660858 ignition[1356]: INFO : Ignition 2.19.0 Mar 14 00:22:49.661675 ignition[1356]: INFO : Stage: files Mar 14 00:22:49.662240 ignition[1356]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 14 00:22:49.662240 ignition[1356]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 14 00:22:49.662240 ignition[1356]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 14 00:22:49.663790 ignition[1356]: INFO : PUT result: OK Mar 14 00:22:49.665274 ignition[1356]: DEBUG : files: compiled without relabeling support, skipping Mar 14 00:22:49.666426 ignition[1356]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 14 00:22:49.666426 ignition[1356]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 14 00:22:49.691140 ignition[1356]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 14 00:22:49.692507 ignition[1356]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 14 00:22:49.692507 ignition[1356]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 14 00:22:49.691845 unknown[1356]: wrote ssh authorized keys file for user: core Mar 14 00:22:49.695222 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file 
"/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 14 00:22:49.695222 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 14 00:22:49.819180 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 14 00:22:50.027874 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 14 00:22:50.027874 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 14 00:22:50.029821 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 14 00:22:50.275577 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 14 00:22:50.450914 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 14 00:22:50.450914 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 14 00:22:50.453580 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 14 00:22:50.453580 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:22:50.453580 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:22:50.453580 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:22:50.453580 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:22:50.453580 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:22:50.453580 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:22:50.453580 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:22:50.453580 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:22:50.453580 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 14 00:22:50.453580 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 14 00:22:50.453580 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 14 00:22:50.453580 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Mar 14 00:22:50.753545 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 14 00:22:51.348562 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 14 00:22:51.349891 ignition[1356]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 14 00:22:51.351034 ignition[1356]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:22:51.352252 ignition[1356]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:22:51.352252 ignition[1356]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 14 00:22:51.352252 ignition[1356]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Mar 14 00:22:51.352252 ignition[1356]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Mar 14 00:22:51.352252 ignition[1356]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:22:51.352252 ignition[1356]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:22:51.352252 ignition[1356]: INFO : files: files passed
Mar 14 00:22:51.352252 ignition[1356]: INFO : Ignition finished successfully
Mar 14 00:22:51.355070 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 14 00:22:51.365570 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 14 00:22:51.370462 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 14 00:22:51.371572 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 14 00:22:51.371725 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 14 00:22:51.397468 initrd-setup-root-after-ignition[1384]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:22:51.399549 initrd-setup-root-after-ignition[1388]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:22:51.400739 initrd-setup-root-after-ignition[1384]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:22:51.400391 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 00:22:51.401735 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 14 00:22:51.409498 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 14 00:22:51.441545 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 14 00:22:51.441662 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 14 00:22:51.442581 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 14 00:22:51.443355 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 14 00:22:51.444790 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 14 00:22:51.450485 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 14 00:22:51.464788 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 00:22:51.471475 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 14 00:22:51.484682 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:22:51.485529 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:22:51.486506 systemd[1]: Stopped target timers.target - Timer Units.
Mar 14 00:22:51.487407 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 14 00:22:51.487584 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 00:22:51.488870 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 14 00:22:51.489740 systemd[1]: Stopped target basic.target - Basic System.
Mar 14 00:22:51.490551 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 14 00:22:51.491355 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 00:22:51.492250 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 14 00:22:51.493043 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 14 00:22:51.493834 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 00:22:51.494665 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 14 00:22:51.495862 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 14 00:22:51.496736 systemd[1]: Stopped target swap.target - Swaps.
Mar 14 00:22:51.497498 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 14 00:22:51.497676 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 00:22:51.498810 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:22:51.499651 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:22:51.500489 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 14 00:22:51.501223 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:22:51.502561 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 14 00:22:51.502750 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:22:51.504085 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 14 00:22:51.504298 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 00:22:51.505114 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 14 00:22:51.505268 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 14 00:22:51.515519 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 14 00:22:51.517368 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 14 00:22:51.517587 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:22:51.523642 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 14 00:22:51.524448 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 14 00:22:51.524650 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:22:51.526271 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 14 00:22:51.526456 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 00:22:51.537194 ignition[1408]: INFO : Ignition 2.19.0
Mar 14 00:22:51.537194 ignition[1408]: INFO : Stage: umount
Mar 14 00:22:51.538384 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 14 00:22:51.540334 ignition[1408]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:22:51.540334 ignition[1408]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:22:51.540334 ignition[1408]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:22:51.540334 ignition[1408]: INFO : PUT result: OK
Mar 14 00:22:51.538527 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 14 00:22:51.544308 ignition[1408]: INFO : umount: umount passed
Mar 14 00:22:51.544308 ignition[1408]: INFO : Ignition finished successfully
Mar 14 00:22:51.546629 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 14 00:22:51.546784 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 14 00:22:51.549140 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 14 00:22:51.549207 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 14 00:22:51.549971 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 14 00:22:51.550036 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 14 00:22:51.551523 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 14 00:22:51.551591 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 14 00:22:51.552886 systemd[1]: Stopped target network.target - Network.
Mar 14 00:22:51.553379 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 14 00:22:51.553444 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 14 00:22:51.553792 systemd[1]: Stopped target paths.target - Path Units.
Mar 14 00:22:51.554165 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 14 00:22:51.558769 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:22:51.560119 systemd[1]: Stopped target slices.target - Slice Units.
Mar 14 00:22:51.561188 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 14 00:22:51.562186 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 14 00:22:51.562243 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 14 00:22:51.563304 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 14 00:22:51.563368 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 14 00:22:51.564385 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 14 00:22:51.564457 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 14 00:22:51.564978 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 14 00:22:51.565034 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 14 00:22:51.567581 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 14 00:22:51.570145 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 14 00:22:51.571369 systemd-networkd[1165]: eth0: DHCPv6 lease lost
Mar 14 00:22:51.572374 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 14 00:22:51.573193 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 14 00:22:51.573358 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 14 00:22:51.575820 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 14 00:22:51.576434 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 14 00:22:51.578801 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 14 00:22:51.578890 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:22:51.583393 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 14 00:22:51.584133 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 14 00:22:51.584224 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 14 00:22:51.584934 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 14 00:22:51.584993 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:22:51.586657 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 14 00:22:51.586717 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:22:51.587213 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 14 00:22:51.587263 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:22:51.589503 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:22:51.601706 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 14 00:22:51.601868 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:22:51.603923 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 14 00:22:51.604158 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 14 00:22:51.606062 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 14 00:22:51.606142 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:22:51.606994 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 14 00:22:51.607043 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:22:51.607723 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 14 00:22:51.607787 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 14 00:22:51.608977 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 14 00:22:51.609039 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 14 00:22:51.610127 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 14 00:22:51.610187 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:22:51.618831 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 14 00:22:51.620595 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 14 00:22:51.620695 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:22:51.624857 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:22:51.624932 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:22:51.628062 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 14 00:22:51.628206 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 14 00:22:51.687782 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 14 00:22:51.687929 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 14 00:22:51.689378 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 14 00:22:51.689960 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 14 00:22:51.690035 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 14 00:22:51.696489 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 14 00:22:51.705645 systemd[1]: Switching root.
Mar 14 00:22:51.736620 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Mar 14 00:22:51.736688 systemd-journald[179]: Journal stopped
Mar 14 00:22:53.512343 kernel: SELinux: policy capability network_peer_controls=1
Mar 14 00:22:53.512454 kernel: SELinux: policy capability open_perms=1
Mar 14 00:22:53.512478 kernel: SELinux: policy capability extended_socket_class=1
Mar 14 00:22:53.512501 kernel: SELinux: policy capability always_check_network=0
Mar 14 00:22:53.512521 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 14 00:22:53.512542 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 14 00:22:53.512564 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 14 00:22:53.512584 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 14 00:22:53.512612 kernel: audit: type=1403 audit(1773447772.233:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 14 00:22:53.512640 systemd[1]: Successfully loaded SELinux policy in 60.981ms.
Mar 14 00:22:53.512682 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.892ms.
Mar 14 00:22:53.512708 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 14 00:22:53.512732 systemd[1]: Detected virtualization amazon.
Mar 14 00:22:53.512754 systemd[1]: Detected architecture x86-64.
Mar 14 00:22:53.512776 systemd[1]: Detected first boot.
Mar 14 00:22:53.512799 systemd[1]: Initializing machine ID from VM UUID.
Mar 14 00:22:53.512820 zram_generator::config[1451]: No configuration found.
Mar 14 00:22:53.512847 systemd[1]: Populated /etc with preset unit settings.
Mar 14 00:22:53.512868 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 14 00:22:53.512891 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 14 00:22:53.512911 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 14 00:22:53.512934 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 14 00:22:53.512955 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 14 00:22:53.512976 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 14 00:22:53.513004 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 14 00:22:53.513028 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 14 00:22:53.513051 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 14 00:22:53.513072 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 14 00:22:53.513093 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 14 00:22:53.513114 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:22:53.513135 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:22:53.513160 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 14 00:22:53.513182 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 14 00:22:53.513205 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 14 00:22:53.513233 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 14 00:22:53.513257 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 14 00:22:53.513304 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:22:53.513324 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 14 00:22:53.513343 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 14 00:22:53.513364 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 14 00:22:53.513386 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 14 00:22:53.513413 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:22:53.513435 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 14 00:22:53.513457 systemd[1]: Reached target slices.target - Slice Units.
Mar 14 00:22:53.513479 systemd[1]: Reached target swap.target - Swaps.
Mar 14 00:22:53.513504 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 14 00:22:53.513525 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 14 00:22:53.513546 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:22:53.513567 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:22:53.513588 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:22:53.513609 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 14 00:22:53.513633 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 14 00:22:53.513655 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 14 00:22:53.513677 systemd[1]: Mounting media.mount - External Media Directory...
Mar 14 00:22:53.513699 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:22:53.513720 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 14 00:22:53.513741 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 14 00:22:53.513761 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 14 00:22:53.513784 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 14 00:22:53.513808 systemd[1]: Reached target machines.target - Containers.
Mar 14 00:22:53.513830 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 14 00:22:53.513852 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:22:53.513873 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 14 00:22:53.513895 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 14 00:22:53.513916 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 14 00:22:53.513936 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 14 00:22:53.513958 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 14 00:22:53.513975 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 14 00:22:53.513997 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 14 00:22:53.514017 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 14 00:22:53.514035 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 14 00:22:53.514053 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 14 00:22:53.514073 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 14 00:22:53.514093 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 14 00:22:53.514114 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 14 00:22:53.514133 kernel: fuse: init (API version 7.39)
Mar 14 00:22:53.514160 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 14 00:22:53.514178 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 14 00:22:53.514199 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 14 00:22:53.514221 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 14 00:22:53.514242 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 14 00:22:53.519829 systemd-journald[1534]: Collecting audit messages is disabled.
Mar 14 00:22:53.519934 systemd[1]: Stopped verity-setup.service.
Mar 14 00:22:53.519967 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:22:53.519988 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 14 00:22:53.520008 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 14 00:22:53.520028 systemd[1]: Mounted media.mount - External Media Directory.
Mar 14 00:22:53.520047 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 14 00:22:53.520068 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 14 00:22:53.520092 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 14 00:22:53.520117 systemd-journald[1534]: Journal started
Mar 14 00:22:53.520158 systemd-journald[1534]: Runtime Journal (/run/log/journal/ec2b8fc578d56b93b6c065a523fec2d2) is 4.7M, max 38.2M, 33.4M free.
Mar 14 00:22:53.104650 systemd[1]: Queued start job for default target multi-user.target.
Mar 14 00:22:53.133962 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Mar 14 00:22:53.134459 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 14 00:22:53.527540 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 14 00:22:53.533998 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:22:53.536178 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 14 00:22:53.536427 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 14 00:22:53.537522 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 14 00:22:53.537712 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 14 00:22:53.538790 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 14 00:22:53.538953 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 14 00:22:53.540120 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 14 00:22:53.540316 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 14 00:22:53.541497 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:22:53.543805 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 14 00:22:53.544810 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 14 00:22:53.564329 kernel: ACPI: bus type drm_connector registered
Mar 14 00:22:53.564946 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 14 00:22:53.579313 kernel: loop: module loaded
Mar 14 00:22:53.575373 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 14 00:22:53.583419 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 14 00:22:53.587441 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 14 00:22:53.587495 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 14 00:22:53.593709 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 14 00:22:53.599475 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 14 00:22:53.607570 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 14 00:22:53.608484 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 00:22:53.613496 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 14 00:22:53.617667 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 14 00:22:53.618740 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 14 00:22:53.627655 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 14 00:22:53.631512 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:22:53.644547 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 14 00:22:53.649759 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 14 00:22:53.651552 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 14 00:22:53.651927 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 14 00:22:53.654535 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 14 00:22:53.654738 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 14 00:22:53.656146 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 14 00:22:53.657634 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 14 00:22:53.659661 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 14 00:22:53.671265 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 14 00:22:53.680550 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 14 00:22:53.696178 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 14 00:22:53.697071 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 14 00:22:53.712370 kernel: loop0: detected capacity change from 0 to 228704
Mar 14 00:22:53.710738 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 14 00:22:53.719186 systemd-journald[1534]: Time spent on flushing to /var/log/journal/ec2b8fc578d56b93b6c065a523fec2d2 is 114.361ms for 1002 entries.
Mar 14 00:22:53.719186 systemd-journald[1534]: System Journal (/var/log/journal/ec2b8fc578d56b93b6c065a523fec2d2) is 8.0M, max 195.6M, 187.6M free.
Mar 14 00:22:53.856453 systemd-journald[1534]: Received client request to flush runtime journal.
Mar 14 00:22:53.856528 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 14 00:22:53.796157 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:22:53.797465 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:22:53.798651 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 14 00:22:53.819613 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 14 00:22:53.832502 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 14 00:22:53.861877 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 14 00:22:53.866200 udevadm[1598]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 14 00:22:53.885270 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 14 00:22:53.887568 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 14 00:22:53.909316 kernel: loop1: detected capacity change from 0 to 140768
Mar 14 00:22:53.916500 systemd-tmpfiles[1597]: ACLs are not supported, ignoring.
Mar 14 00:22:53.916530 systemd-tmpfiles[1597]: ACLs are not supported, ignoring.
Mar 14 00:22:53.924869 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:22:54.010433 kernel: loop2: detected capacity change from 0 to 61336
Mar 14 00:22:54.086582 kernel: loop3: detected capacity change from 0 to 142488
Mar 14 00:22:54.210302 kernel: loop4: detected capacity change from 0 to 228704
Mar 14 00:22:54.255320 kernel: loop5: detected capacity change from 0 to 140768
Mar 14 00:22:54.278330 kernel: loop6: detected capacity change from 0 to 61336
Mar 14 00:22:54.296313 kernel: loop7: detected capacity change from 0 to 142488
Mar 14 00:22:54.330931 (sd-merge)[1609]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Mar 14 00:22:54.331646 (sd-merge)[1609]: Merged extensions into '/usr'.
Mar 14 00:22:54.339598 systemd[1]: Reloading requested from client PID 1577 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 14 00:22:54.339807 systemd[1]: Reloading...
Mar 14 00:22:54.488366 zram_generator::config[1635]: No configuration found.
Mar 14 00:22:54.735077 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:22:54.802920 systemd[1]: Reloading finished in 462 ms.
Mar 14 00:22:54.834378 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 14 00:22:54.835201 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 14 00:22:54.845551 systemd[1]: Starting ensure-sysext.service...
Mar 14 00:22:54.847763 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 14 00:22:54.852776 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:22:54.867739 systemd[1]: Reloading requested from client PID 1687 ('systemctl') (unit ensure-sysext.service)...
Mar 14 00:22:54.867890 systemd[1]: Reloading...
Mar 14 00:22:54.918818 systemd-tmpfiles[1688]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 14 00:22:54.919376 systemd-tmpfiles[1688]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 14 00:22:54.920237 systemd-udevd[1689]: Using default interface naming scheme 'v255'. Mar 14 00:22:54.922253 systemd-tmpfiles[1688]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 14 00:22:54.922847 systemd-tmpfiles[1688]: ACLs are not supported, ignoring. Mar 14 00:22:54.922941 systemd-tmpfiles[1688]: ACLs are not supported, ignoring. Mar 14 00:22:54.931194 systemd-tmpfiles[1688]: Detected autofs mount point /boot during canonicalization of boot. Mar 14 00:22:54.931406 systemd-tmpfiles[1688]: Skipping /boot Mar 14 00:22:54.949804 systemd-tmpfiles[1688]: Detected autofs mount point /boot during canonicalization of boot. Mar 14 00:22:54.949977 systemd-tmpfiles[1688]: Skipping /boot Mar 14 00:22:55.004323 zram_generator::config[1717]: No configuration found. Mar 14 00:22:55.175456 (udev-worker)[1736]: Network interface NamePolicy= disabled on kernel command line. Mar 14 00:22:55.188622 ldconfig[1572]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Mar 14 00:22:55.256319 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Mar 14 00:22:55.268321 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Mar 14 00:22:55.271311 kernel: ACPI: button: Power Button [PWRF] Mar 14 00:22:55.274306 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Mar 14 00:22:55.281342 kernel: ACPI: button: Sleep Button [SLPF] Mar 14 00:22:55.352321 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5 Mar 14 00:22:55.366560 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:22:55.398313 kernel: mousedev: PS/2 mouse device common for all mice Mar 14 00:22:55.422845 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1739) Mar 14 00:22:55.543193 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 14 00:22:55.543458 systemd[1]: Reloading finished in 674 ms. Mar 14 00:22:55.565148 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 14 00:22:55.567725 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 14 00:22:55.570893 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 14 00:22:55.614972 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 14 00:22:55.621838 systemd[1]: Finished ensure-sysext.service. Mar 14 00:22:55.642587 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Mar 14 00:22:55.643319 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Mar 14 00:22:55.648536 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 14 00:22:55.656600 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 14 00:22:55.659510 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 14 00:22:55.664526 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 14 00:22:55.670540 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 14 00:22:55.676555 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 14 00:22:55.679603 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 14 00:22:55.686556 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 14 00:22:55.687402 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 14 00:22:55.701658 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 14 00:22:55.702387 lvm[1888]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 14 00:22:55.712520 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 14 00:22:55.721649 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 14 00:22:55.726645 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 14 00:22:55.728624 systemd[1]: Reached target time-set.target - System Time Set. Mar 14 00:22:55.741795 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 14 00:22:55.753701 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 14 00:22:55.756804 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 14 00:22:55.759582 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 14 00:22:55.759816 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 14 00:22:55.766800 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 14 00:22:55.767016 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 14 00:22:55.768085 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 14 00:22:55.768273 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 14 00:22:55.778411 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 14 00:22:55.783588 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 14 00:22:55.784771 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 14 00:22:55.786086 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 14 00:22:55.786490 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 14 00:22:55.794373 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 14 00:22:55.805047 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 14 00:22:55.812387 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 14 00:22:55.813738 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 14 00:22:55.821113 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 14 00:22:55.847171 lvm[1915]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Mar 14 00:22:55.861795 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 14 00:22:55.872684 augenrules[1921]: No rules Mar 14 00:22:55.876509 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 14 00:22:55.879113 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 14 00:22:55.889443 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 14 00:22:55.890427 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 14 00:22:55.905060 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 14 00:22:55.907327 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 14 00:22:55.929711 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 14 00:22:55.990376 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:22:56.014089 systemd-networkd[1898]: lo: Link UP Mar 14 00:22:56.014111 systemd-networkd[1898]: lo: Gained carrier Mar 14 00:22:56.016090 systemd-networkd[1898]: Enumeration completed Mar 14 00:22:56.016743 systemd-networkd[1898]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:22:56.016753 systemd-networkd[1898]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 14 00:22:56.018390 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Mar 14 00:22:56.022598 systemd-networkd[1898]: eth0: Link UP Mar 14 00:22:56.022989 systemd-networkd[1898]: eth0: Gained carrier Mar 14 00:22:56.023019 systemd-networkd[1898]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:22:56.029691 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 14 00:22:56.030842 systemd-resolved[1899]: Positive Trust Anchors: Mar 14 00:22:56.031262 systemd-resolved[1899]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 14 00:22:56.031336 systemd-resolved[1899]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 14 00:22:56.038510 systemd-networkd[1898]: eth0: DHCPv4 address 172.31.30.82/20, gateway 172.31.16.1 acquired from 172.31.16.1 Mar 14 00:22:56.038981 systemd-resolved[1899]: Defaulting to hostname 'linux'. Mar 14 00:22:56.041724 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 14 00:22:56.042522 systemd[1]: Reached target network.target - Network. Mar 14 00:22:56.043096 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 14 00:22:56.043564 systemd[1]: Reached target sysinit.target - System Initialization. Mar 14 00:22:56.044108 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Mar 14 00:22:56.044606 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 14 00:22:56.045164 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 14 00:22:56.045676 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 14 00:22:56.046062 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 14 00:22:56.046508 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 14 00:22:56.046556 systemd[1]: Reached target paths.target - Path Units. Mar 14 00:22:56.046959 systemd[1]: Reached target timers.target - Timer Units. Mar 14 00:22:56.048373 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 14 00:22:56.050214 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 14 00:22:56.058586 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 14 00:22:56.059823 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 14 00:22:56.060509 systemd[1]: Reached target sockets.target - Socket Units. Mar 14 00:22:56.060960 systemd[1]: Reached target basic.target - Basic System. Mar 14 00:22:56.061428 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 14 00:22:56.061469 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 14 00:22:56.062664 systemd[1]: Starting containerd.service - containerd container runtime... Mar 14 00:22:56.070620 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 14 00:22:56.074231 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 14 00:22:56.078419 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Mar 14 00:22:56.080475 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 14 00:22:56.081369 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 14 00:22:56.091988 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 14 00:22:56.100187 systemd[1]: Started ntpd.service - Network Time Service. Mar 14 00:22:56.123447 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 14 00:22:56.127440 systemd[1]: Starting setup-oem.service - Setup OEM... Mar 14 00:22:56.131837 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 14 00:22:56.141554 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 14 00:22:56.148513 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 14 00:22:56.150848 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 14 00:22:56.152560 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 14 00:22:56.159544 systemd[1]: Starting update-engine.service - Update Engine... Mar 14 00:22:56.164267 jq[1948]: false Mar 14 00:22:56.163445 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 14 00:22:56.169812 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 14 00:22:56.170369 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Mar 14 00:22:56.234749 (ntainerd)[1970]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 14 00:22:56.254025 jq[1961]: true Mar 14 00:22:56.264794 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 14 00:22:56.265066 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 14 00:22:56.285681 extend-filesystems[1949]: Found loop4 Mar 14 00:22:56.292359 extend-filesystems[1949]: Found loop5 Mar 14 00:22:56.292359 extend-filesystems[1949]: Found loop6 Mar 14 00:22:56.292359 extend-filesystems[1949]: Found loop7 Mar 14 00:22:56.292359 extend-filesystems[1949]: Found nvme0n1 Mar 14 00:22:56.292359 extend-filesystems[1949]: Found nvme0n1p1 Mar 14 00:22:56.292359 extend-filesystems[1949]: Found nvme0n1p2 Mar 14 00:22:56.292359 extend-filesystems[1949]: Found nvme0n1p3 Mar 14 00:22:56.292359 extend-filesystems[1949]: Found usr Mar 14 00:22:56.292359 extend-filesystems[1949]: Found nvme0n1p4 Mar 14 00:22:56.292359 extend-filesystems[1949]: Found nvme0n1p6 Mar 14 00:22:56.292359 extend-filesystems[1949]: Found nvme0n1p7 Mar 14 00:22:56.292359 extend-filesystems[1949]: Found nvme0n1p9 Mar 14 00:22:56.292359 extend-filesystems[1949]: Checking size of /dev/nvme0n1p9 Mar 14 00:22:56.330450 jq[1976]: true Mar 14 00:22:56.316461 dbus-daemon[1947]: [system] SELinux support is enabled Mar 14 00:22:56.331012 tar[1963]: linux-amd64/LICENSE Mar 14 00:22:56.331012 tar[1963]: linux-amd64/helm Mar 14 00:22:56.321406 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Mar 14 00:22:56.336424 update_engine[1960]: I20260314 00:22:56.330433 1960 main.cc:92] Flatcar Update Engine starting Mar 14 00:22:56.339537 extend-filesystems[1949]: Resized partition /dev/nvme0n1p9 Mar 14 00:22:56.338837 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 14 00:22:56.343773 ntpd[1951]: 14 Mar 00:22:56 ntpd[1951]: ntpd 4.2.8p17@1.4004-o Fri Mar 13 21:53:10 UTC 2026 (1): Starting Mar 14 00:22:56.343773 ntpd[1951]: 14 Mar 00:22:56 ntpd[1951]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 14 00:22:56.343773 ntpd[1951]: 14 Mar 00:22:56 ntpd[1951]: ---------------------------------------------------- Mar 14 00:22:56.343773 ntpd[1951]: 14 Mar 00:22:56 ntpd[1951]: ntp-4 is maintained by Network Time Foundation, Mar 14 00:22:56.343773 ntpd[1951]: 14 Mar 00:22:56 ntpd[1951]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 14 00:22:56.343773 ntpd[1951]: 14 Mar 00:22:56 ntpd[1951]: corporation. Support and training for ntp-4 are Mar 14 00:22:56.343773 ntpd[1951]: 14 Mar 00:22:56 ntpd[1951]: available at https://www.nwtime.org/support Mar 14 00:22:56.343773 ntpd[1951]: 14 Mar 00:22:56 ntpd[1951]: ---------------------------------------------------- Mar 14 00:22:56.338343 ntpd[1951]: ntpd 4.2.8p17@1.4004-o Fri Mar 13 21:53:10 UTC 2026 (1): Starting Mar 14 00:22:56.338909 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Mar 14 00:22:56.349749 ntpd[1951]: 14 Mar 00:22:56 ntpd[1951]: proto: precision = 0.076 usec (-24) Mar 14 00:22:56.349749 ntpd[1951]: 14 Mar 00:22:56 ntpd[1951]: basedate set to 2026-03-01 Mar 14 00:22:56.349749 ntpd[1951]: 14 Mar 00:22:56 ntpd[1951]: gps base set to 2026-03-01 (week 2408) Mar 14 00:22:56.349844 extend-filesystems[1990]: resize2fs 1.47.1 (20-May-2024) Mar 14 00:22:56.355463 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Mar 14 00:22:56.338368 ntpd[1951]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 14 00:22:56.342864 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 14 00:22:56.338379 ntpd[1951]: ---------------------------------------------------- Mar 14 00:22:56.342894 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 14 00:22:56.338388 ntpd[1951]: ntp-4 is maintained by Network Time Foundation, Mar 14 00:22:56.338398 ntpd[1951]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 14 00:22:56.338407 ntpd[1951]: corporation. 
Support and training for ntp-4 are Mar 14 00:22:56.338416 ntpd[1951]: available at https://www.nwtime.org/support Mar 14 00:22:56.338425 ntpd[1951]: ---------------------------------------------------- Mar 14 00:22:56.347525 ntpd[1951]: proto: precision = 0.076 usec (-24) Mar 14 00:22:56.348424 ntpd[1951]: basedate set to 2026-03-01 Mar 14 00:22:56.348444 ntpd[1951]: gps base set to 2026-03-01 (week 2408) Mar 14 00:22:56.348566 dbus-daemon[1947]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1898 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 14 00:22:56.357021 ntpd[1951]: Listen and drop on 0 v6wildcard [::]:123 Mar 14 00:22:56.357255 ntpd[1951]: 14 Mar 00:22:56 ntpd[1951]: Listen and drop on 0 v6wildcard [::]:123 Mar 14 00:22:56.357255 ntpd[1951]: 14 Mar 00:22:56 ntpd[1951]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 14 00:22:56.357086 ntpd[1951]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 14 00:22:56.357542 ntpd[1951]: Listen normally on 2 lo 127.0.0.1:123 Mar 14 00:22:56.359510 ntpd[1951]: 14 Mar 00:22:56 ntpd[1951]: Listen normally on 2 lo 127.0.0.1:123 Mar 14 00:22:56.359510 ntpd[1951]: 14 Mar 00:22:56 ntpd[1951]: Listen normally on 3 eth0 172.31.30.82:123 Mar 14 00:22:56.359510 ntpd[1951]: 14 Mar 00:22:56 ntpd[1951]: Listen normally on 4 lo [::1]:123 Mar 14 00:22:56.359510 ntpd[1951]: 14 Mar 00:22:56 ntpd[1951]: bind(21) AF_INET6 fe80::4ee:38ff:fe1b:6831%2#123 flags 0x11 failed: Cannot assign requested address Mar 14 00:22:56.359510 ntpd[1951]: 14 Mar 00:22:56 ntpd[1951]: unable to create socket on eth0 (5) for fe80::4ee:38ff:fe1b:6831%2#123 Mar 14 00:22:56.359510 ntpd[1951]: 14 Mar 00:22:56 ntpd[1951]: failed to init interface for address fe80::4ee:38ff:fe1b:6831%2 Mar 14 00:22:56.359510 ntpd[1951]: 14 Mar 00:22:56 ntpd[1951]: Listening on routing socket on fd #21 for interface updates Mar 14 
00:22:56.357595 ntpd[1951]: Listen normally on 3 eth0 172.31.30.82:123 Mar 14 00:22:56.357637 ntpd[1951]: Listen normally on 4 lo [::1]:123 Mar 14 00:22:56.357687 ntpd[1951]: bind(21) AF_INET6 fe80::4ee:38ff:fe1b:6831%2#123 flags 0x11 failed: Cannot assign requested address Mar 14 00:22:56.357709 ntpd[1951]: unable to create socket on eth0 (5) for fe80::4ee:38ff:fe1b:6831%2#123 Mar 14 00:22:56.357722 ntpd[1951]: failed to init interface for address fe80::4ee:38ff:fe1b:6831%2 Mar 14 00:22:56.357754 ntpd[1951]: Listening on routing socket on fd #21 for interface updates Mar 14 00:22:56.367614 dbus-daemon[1947]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 14 00:22:56.367300 systemd[1]: motdgen.service: Deactivated successfully. Mar 14 00:22:56.369557 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 14 00:22:56.369930 ntpd[1951]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 14 00:22:56.370270 ntpd[1951]: 14 Mar 00:22:56 ntpd[1951]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 14 00:22:56.370270 ntpd[1951]: 14 Mar 00:22:56 ntpd[1951]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 14 00:22:56.369969 ntpd[1951]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 14 00:22:56.386546 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Mar 14 00:22:56.389825 coreos-metadata[1946]: Mar 14 00:22:56.389 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 14 00:22:56.390988 coreos-metadata[1946]: Mar 14 00:22:56.390 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Mar 14 00:22:56.392567 coreos-metadata[1946]: Mar 14 00:22:56.392 INFO Fetch successful Mar 14 00:22:56.392567 coreos-metadata[1946]: Mar 14 00:22:56.392 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Mar 14 00:22:56.393947 coreos-metadata[1946]: Mar 14 00:22:56.393 INFO Fetch successful Mar 14 00:22:56.393947 coreos-metadata[1946]: Mar 14 00:22:56.393 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Mar 14 00:22:56.399466 coreos-metadata[1946]: Mar 14 00:22:56.394 INFO Fetch successful Mar 14 00:22:56.399466 coreos-metadata[1946]: Mar 14 00:22:56.394 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Mar 14 00:22:56.404386 coreos-metadata[1946]: Mar 14 00:22:56.402 INFO Fetch successful Mar 14 00:22:56.404386 coreos-metadata[1946]: Mar 14 00:22:56.402 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Mar 14 00:22:56.404386 coreos-metadata[1946]: Mar 14 00:22:56.403 INFO Fetch failed with 404: resource not found Mar 14 00:22:56.404386 coreos-metadata[1946]: Mar 14 00:22:56.403 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Mar 14 00:22:56.407407 systemd[1]: Started update-engine.service - Update Engine. 
Mar 14 00:22:56.408046 update_engine[1960]: I20260314 00:22:56.407778 1960 update_check_scheduler.cc:74] Next update check in 9m26s Mar 14 00:22:56.412422 coreos-metadata[1946]: Mar 14 00:22:56.412 INFO Fetch successful Mar 14 00:22:56.412519 coreos-metadata[1946]: Mar 14 00:22:56.412 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Mar 14 00:22:56.413312 coreos-metadata[1946]: Mar 14 00:22:56.413 INFO Fetch successful Mar 14 00:22:56.413312 coreos-metadata[1946]: Mar 14 00:22:56.413 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Mar 14 00:22:56.414068 systemd-logind[1958]: Watching system buttons on /dev/input/event1 (Power Button) Mar 14 00:22:56.418858 coreos-metadata[1946]: Mar 14 00:22:56.414 INFO Fetch successful Mar 14 00:22:56.418858 coreos-metadata[1946]: Mar 14 00:22:56.414 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Mar 14 00:22:56.418858 coreos-metadata[1946]: Mar 14 00:22:56.414 INFO Fetch successful Mar 14 00:22:56.418858 coreos-metadata[1946]: Mar 14 00:22:56.414 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Mar 14 00:22:56.414105 systemd-logind[1958]: Watching system buttons on /dev/input/event2 (Sleep Button) Mar 14 00:22:56.414130 systemd-logind[1958]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 14 00:22:56.415450 systemd-logind[1958]: New seat seat0. Mar 14 00:22:56.417468 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 14 00:22:56.418058 systemd[1]: Started systemd-logind.service - User Login Management. Mar 14 00:22:56.420735 coreos-metadata[1946]: Mar 14 00:22:56.420 INFO Fetch successful Mar 14 00:22:56.511120 systemd[1]: Finished setup-oem.service - Setup OEM. 
Mar 14 00:22:56.554311 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Mar 14 00:22:56.575689 extend-filesystems[1990]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Mar 14 00:22:56.575689 extend-filesystems[1990]: old_desc_blocks = 1, new_desc_blocks = 2 Mar 14 00:22:56.575689 extend-filesystems[1990]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Mar 14 00:22:56.584664 extend-filesystems[1949]: Resized filesystem in /dev/nvme0n1p9 Mar 14 00:22:56.643704 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 14 00:22:56.647131 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 14 00:22:56.647439 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 14 00:22:56.656663 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 14 00:22:56.659670 bash[2028]: Updated "/home/core/.ssh/authorized_keys" Mar 14 00:22:56.661958 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 14 00:22:56.670719 systemd[1]: Starting sshkeys.service... Mar 14 00:22:56.721152 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1738) Mar 14 00:22:56.721444 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 14 00:22:56.727826 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 14 00:22:56.930791 dbus-daemon[1947]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 14 00:22:56.931411 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Mar 14 00:22:56.933733 coreos-metadata[2048]: Mar 14 00:22:56.933 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 14 00:22:56.939048 coreos-metadata[2048]: Mar 14 00:22:56.937 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Mar 14 00:22:56.939048 coreos-metadata[2048]: Mar 14 00:22:56.938 INFO Fetch successful Mar 14 00:22:56.939048 coreos-metadata[2048]: Mar 14 00:22:56.938 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Mar 14 00:22:56.940945 coreos-metadata[2048]: Mar 14 00:22:56.940 INFO Fetch successful Mar 14 00:22:56.941180 dbus-daemon[1947]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1998 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 14 00:22:56.950393 unknown[2048]: wrote ssh authorized keys file for user: core Mar 14 00:22:56.953734 systemd[1]: Starting polkit.service - Authorization Manager... Mar 14 00:22:57.000446 polkitd[2117]: Started polkitd version 121 Mar 14 00:22:57.021022 update-ssh-keys[2118]: Updated "/home/core/.ssh/authorized_keys" Mar 14 00:22:57.027499 locksmithd[1999]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 14 00:22:57.029781 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 14 00:22:57.044267 systemd[1]: Finished sshkeys.service. Mar 14 00:22:57.050852 polkitd[2117]: Loading rules from directory /etc/polkit-1/rules.d Mar 14 00:22:57.058006 polkitd[2117]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 14 00:22:57.063863 polkitd[2117]: Finished loading, compiling and executing 2 rules Mar 14 00:22:57.071364 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Mar 14 00:22:57.077548 dbus-daemon[1947]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Mar 14 00:22:57.078889 systemd[1]: Started polkit.service - Authorization Manager.
Mar 14 00:22:57.079029 polkitd[2117]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Mar 14 00:22:57.149109 systemd-hostnamed[1998]: Hostname set to (transient)
Mar 14 00:22:57.150535 systemd-resolved[1899]: System hostname changed to 'ip-172-31-30-82'.
Mar 14 00:22:57.163094 sshd_keygen[1972]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 14 00:22:57.221275 containerd[1970]: time="2026-03-14T00:22:57.221127048Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 14 00:22:57.249662 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 14 00:22:57.265760 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 14 00:22:57.270673 systemd[1]: Started sshd@0-172.31.30.82:22-68.220.241.50:58312.service - OpenSSH per-connection server daemon (68.220.241.50:58312).
Mar 14 00:22:57.297784 systemd[1]: issuegen.service: Deactivated successfully.
Mar 14 00:22:57.298075 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 14 00:22:57.315398 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 14 00:22:57.322528 containerd[1970]: time="2026-03-14T00:22:57.322216582Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 14 00:22:57.325316 containerd[1970]: time="2026-03-14T00:22:57.324964091Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:22:57.325316 containerd[1970]: time="2026-03-14T00:22:57.325015047Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 14 00:22:57.325316 containerd[1970]: time="2026-03-14T00:22:57.325043161Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 14 00:22:57.325316 containerd[1970]: time="2026-03-14T00:22:57.325240043Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 14 00:22:57.325316 containerd[1970]: time="2026-03-14T00:22:57.325263172Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 14 00:22:57.326389 containerd[1970]: time="2026-03-14T00:22:57.325646628Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:22:57.326389 containerd[1970]: time="2026-03-14T00:22:57.325673775Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 14 00:22:57.326389 containerd[1970]: time="2026-03-14T00:22:57.325910513Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:22:57.326389 containerd[1970]: time="2026-03-14T00:22:57.325933082Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 14 00:22:57.326389 containerd[1970]: time="2026-03-14T00:22:57.325952713Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:22:57.326389 containerd[1970]: time="2026-03-14T00:22:57.325969781Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 14 00:22:57.326389 containerd[1970]: time="2026-03-14T00:22:57.326064267Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 14 00:22:57.326389 containerd[1970]: time="2026-03-14T00:22:57.326346507Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 14 00:22:57.326889 containerd[1970]: time="2026-03-14T00:22:57.326865707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:22:57.326967 containerd[1970]: time="2026-03-14T00:22:57.326952816Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 14 00:22:57.327126 containerd[1970]: time="2026-03-14T00:22:57.327110274Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 14 00:22:57.327255 containerd[1970]: time="2026-03-14T00:22:57.327239702Z" level=info msg="metadata content store policy set" policy=shared
Mar 14 00:22:57.332252 containerd[1970]: time="2026-03-14T00:22:57.332175476Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 14 00:22:57.332252 containerd[1970]: time="2026-03-14T00:22:57.332269405Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 14 00:22:57.332252 containerd[1970]: time="2026-03-14T00:22:57.332308584Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 14 00:22:57.332252 containerd[1970]: time="2026-03-14T00:22:57.332353511Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 14 00:22:57.333273 containerd[1970]: time="2026-03-14T00:22:57.332380182Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 14 00:22:57.333273 containerd[1970]: time="2026-03-14T00:22:57.332582169Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 14 00:22:57.333273 containerd[1970]: time="2026-03-14T00:22:57.332925424Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 14 00:22:57.333273 containerd[1970]: time="2026-03-14T00:22:57.333068328Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 14 00:22:57.333273 containerd[1970]: time="2026-03-14T00:22:57.333092435Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 14 00:22:57.333273 containerd[1970]: time="2026-03-14T00:22:57.333112297Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 14 00:22:57.333273 containerd[1970]: time="2026-03-14T00:22:57.333134022Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 14 00:22:57.333273 containerd[1970]: time="2026-03-14T00:22:57.333153320Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 14 00:22:57.333273 containerd[1970]: time="2026-03-14T00:22:57.333172595Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 14 00:22:57.333273 containerd[1970]: time="2026-03-14T00:22:57.333194169Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 14 00:22:57.333273 containerd[1970]: time="2026-03-14T00:22:57.333218541Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 14 00:22:57.333273 containerd[1970]: time="2026-03-14T00:22:57.333252955Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 14 00:22:57.333273 containerd[1970]: time="2026-03-14T00:22:57.333270248Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 14 00:22:57.333759 containerd[1970]: time="2026-03-14T00:22:57.333301808Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 14 00:22:57.333759 containerd[1970]: time="2026-03-14T00:22:57.333330184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 14 00:22:57.333759 containerd[1970]: time="2026-03-14T00:22:57.333351661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 14 00:22:57.333759 containerd[1970]: time="2026-03-14T00:22:57.333369979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 14 00:22:57.333759 containerd[1970]: time="2026-03-14T00:22:57.333388681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 14 00:22:57.333759 containerd[1970]: time="2026-03-14T00:22:57.333407161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 14 00:22:57.333759 containerd[1970]: time="2026-03-14T00:22:57.333426701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 14 00:22:57.333759 containerd[1970]: time="2026-03-14T00:22:57.333444788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 14 00:22:57.333759 containerd[1970]: time="2026-03-14T00:22:57.333470296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 14 00:22:57.333759 containerd[1970]: time="2026-03-14T00:22:57.333490413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 14 00:22:57.333759 containerd[1970]: time="2026-03-14T00:22:57.333512048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 14 00:22:57.333759 containerd[1970]: time="2026-03-14T00:22:57.333531267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 14 00:22:57.333759 containerd[1970]: time="2026-03-14T00:22:57.333549532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 14 00:22:57.333759 containerd[1970]: time="2026-03-14T00:22:57.333568836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 14 00:22:57.333759 containerd[1970]: time="2026-03-14T00:22:57.333592160Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 14 00:22:57.334297 containerd[1970]: time="2026-03-14T00:22:57.333624920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 14 00:22:57.334297 containerd[1970]: time="2026-03-14T00:22:57.333643358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 14 00:22:57.334297 containerd[1970]: time="2026-03-14T00:22:57.333668081Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 14 00:22:57.334297 containerd[1970]: time="2026-03-14T00:22:57.333720164Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 14 00:22:57.334297 containerd[1970]: time="2026-03-14T00:22:57.333746482Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 14 00:22:57.334297 containerd[1970]: time="2026-03-14T00:22:57.333763852Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 14 00:22:57.334297 containerd[1970]: time="2026-03-14T00:22:57.333782580Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 14 00:22:57.334297 containerd[1970]: time="2026-03-14T00:22:57.333798048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 14 00:22:57.334297 containerd[1970]: time="2026-03-14T00:22:57.333844190Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 14 00:22:57.334297 containerd[1970]: time="2026-03-14T00:22:57.333859738Z" level=info msg="NRI interface is disabled by configuration."
Mar 14 00:22:57.334297 containerd[1970]: time="2026-03-14T00:22:57.333875315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 14 00:22:57.340763 ntpd[1951]: 14 Mar 00:22:57 ntpd[1951]: bind(24) AF_INET6 fe80::4ee:38ff:fe1b:6831%2#123 flags 0x11 failed: Cannot assign requested address
Mar 14 00:22:57.340763 ntpd[1951]: 14 Mar 00:22:57 ntpd[1951]: unable to create socket on eth0 (6) for fe80::4ee:38ff:fe1b:6831%2#123
Mar 14 00:22:57.340763 ntpd[1951]: 14 Mar 00:22:57 ntpd[1951]: failed to init interface for address fe80::4ee:38ff:fe1b:6831%2
Mar 14 00:22:57.337211 systemd[1]: Started containerd.service - containerd container runtime.
Mar 14 00:22:57.338793 ntpd[1951]: bind(24) AF_INET6 fe80::4ee:38ff:fe1b:6831%2#123 flags 0x11 failed: Cannot assign requested address
Mar 14 00:22:57.341214 containerd[1970]: time="2026-03-14T00:22:57.334355022Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 14 00:22:57.341214 containerd[1970]: time="2026-03-14T00:22:57.334443978Z" level=info msg="Connect containerd service"
Mar 14 00:22:57.341214 containerd[1970]: time="2026-03-14T00:22:57.334505275Z" level=info msg="using legacy CRI server"
Mar 14 00:22:57.341214 containerd[1970]: time="2026-03-14T00:22:57.334516773Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 14 00:22:57.341214 containerd[1970]: time="2026-03-14T00:22:57.334811435Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 14 00:22:57.341214 containerd[1970]: time="2026-03-14T00:22:57.335823074Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 14 00:22:57.341214 containerd[1970]: time="2026-03-14T00:22:57.336577977Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 14 00:22:57.341214 containerd[1970]: time="2026-03-14T00:22:57.336626246Z" level=info msg="Start subscribing containerd event"
Mar 14 00:22:57.341214 containerd[1970]: time="2026-03-14T00:22:57.336768811Z" level=info msg="Start recovering state"
Mar 14 00:22:57.341214 containerd[1970]: time="2026-03-14T00:22:57.336656096Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 14 00:22:57.341214 containerd[1970]: time="2026-03-14T00:22:57.336977101Z" level=info msg="Start event monitor"
Mar 14 00:22:57.341214 containerd[1970]: time="2026-03-14T00:22:57.336997791Z" level=info msg="Start snapshots syncer"
Mar 14 00:22:57.341214 containerd[1970]: time="2026-03-14T00:22:57.337011983Z" level=info msg="Start cni network conf syncer for default"
Mar 14 00:22:57.341214 containerd[1970]: time="2026-03-14T00:22:57.337028947Z" level=info msg="Start streaming server"
Mar 14 00:22:57.341214 containerd[1970]: time="2026-03-14T00:22:57.337933266Z" level=info msg="containerd successfully booted in 0.118983s"
Mar 14 00:22:57.338831 ntpd[1951]: unable to create socket on eth0 (6) for fe80::4ee:38ff:fe1b:6831%2#123
Mar 14 00:22:57.338853 ntpd[1951]: failed to init interface for address fe80::4ee:38ff:fe1b:6831%2
Mar 14 00:22:57.358887 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 14 00:22:57.370688 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 14 00:22:57.379494 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 14 00:22:57.380483 systemd[1]: Reached target getty.target - Login Prompts.
Mar 14 00:22:57.649334 tar[1963]: linux-amd64/README.md
Mar 14 00:22:57.664880 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 14 00:22:57.785320 sshd[2158]: Accepted publickey for core from 68.220.241.50 port 58312 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:22:57.787662 sshd[2158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:22:57.797848 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 14 00:22:57.804701 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 14 00:22:57.808764 systemd-logind[1958]: New session 1 of user core.
Mar 14 00:22:57.821564 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 14 00:22:57.832713 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 14 00:22:57.837124 (systemd)[2174]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 14 00:22:57.884403 systemd-networkd[1898]: eth0: Gained IPv6LL
Mar 14 00:22:57.888450 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 14 00:22:57.890804 systemd[1]: Reached target network-online.target - Network is Online.
Mar 14 00:22:57.901607 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Mar 14 00:22:57.913576 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:22:57.918632 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 14 00:22:57.987484 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 14 00:22:58.025315 amazon-ssm-agent[2181]: Initializing new seelog logger
Mar 14 00:22:58.025315 amazon-ssm-agent[2181]: New Seelog Logger Creation Complete
Mar 14 00:22:58.025315 amazon-ssm-agent[2181]: 2026/03/14 00:22:58 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 14 00:22:58.025315 amazon-ssm-agent[2181]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 14 00:22:58.025315 amazon-ssm-agent[2181]: 2026/03/14 00:22:58 processing appconfig overrides
Mar 14 00:22:58.026810 amazon-ssm-agent[2181]: 2026-03-14 00:22:58 INFO Proxy environment variables:
Mar 14 00:22:58.027014 amazon-ssm-agent[2181]: 2026/03/14 00:22:58 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 14 00:22:58.027096 amazon-ssm-agent[2181]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 14 00:22:58.027332 amazon-ssm-agent[2181]: 2026/03/14 00:22:58 processing appconfig overrides
Mar 14 00:22:58.028015 amazon-ssm-agent[2181]: 2026/03/14 00:22:58 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 14 00:22:58.028097 amazon-ssm-agent[2181]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 14 00:22:58.028261 amazon-ssm-agent[2181]: 2026/03/14 00:22:58 processing appconfig overrides
Mar 14 00:22:58.031060 amazon-ssm-agent[2181]: 2026/03/14 00:22:58 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 14 00:22:58.031223 amazon-ssm-agent[2181]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 14 00:22:58.031443 amazon-ssm-agent[2181]: 2026/03/14 00:22:58 processing appconfig overrides
Mar 14 00:22:58.048110 systemd[2174]: Queued start job for default target default.target.
Mar 14 00:22:58.056434 systemd[2174]: Created slice app.slice - User Application Slice.
Mar 14 00:22:58.056484 systemd[2174]: Reached target paths.target - Paths.
Mar 14 00:22:58.056514 systemd[2174]: Reached target timers.target - Timers.
Mar 14 00:22:58.065924 systemd[2174]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 14 00:22:58.092138 systemd[2174]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 14 00:22:58.092329 systemd[2174]: Reached target sockets.target - Sockets.
Mar 14 00:22:58.092359 systemd[2174]: Reached target basic.target - Basic System.
Mar 14 00:22:58.092412 systemd[2174]: Reached target default.target - Main User Target.
Mar 14 00:22:58.092452 systemd[2174]: Startup finished in 247ms.
Mar 14 00:22:58.097513 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 14 00:22:58.106865 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 14 00:22:58.128194 amazon-ssm-agent[2181]: 2026-03-14 00:22:58 INFO https_proxy:
Mar 14 00:22:58.227199 amazon-ssm-agent[2181]: 2026-03-14 00:22:58 INFO http_proxy:
Mar 14 00:22:58.325310 amazon-ssm-agent[2181]: 2026-03-14 00:22:58 INFO no_proxy:
Mar 14 00:22:58.426809 amazon-ssm-agent[2181]: 2026-03-14 00:22:58 INFO Checking if agent identity type OnPrem can be assumed
Mar 14 00:22:58.445401 amazon-ssm-agent[2181]: 2026-03-14 00:22:58 INFO Checking if agent identity type EC2 can be assumed
Mar 14 00:22:58.445546 amazon-ssm-agent[2181]: 2026-03-14 00:22:58 INFO Agent will take identity from EC2
Mar 14 00:22:58.445546 amazon-ssm-agent[2181]: 2026-03-14 00:22:58 INFO [amazon-ssm-agent] using named pipe channel for IPC
Mar 14 00:22:58.445546 amazon-ssm-agent[2181]: 2026-03-14 00:22:58 INFO [amazon-ssm-agent] using named pipe channel for IPC
Mar 14 00:22:58.445546 amazon-ssm-agent[2181]: 2026-03-14 00:22:58 INFO [amazon-ssm-agent] using named pipe channel for IPC
Mar 14 00:22:58.445546 amazon-ssm-agent[2181]: 2026-03-14 00:22:58 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Mar 14 00:22:58.445546 amazon-ssm-agent[2181]: 2026-03-14 00:22:58 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Mar 14 00:22:58.445546 amazon-ssm-agent[2181]: 2026-03-14 00:22:58 INFO [amazon-ssm-agent] Starting Core Agent
Mar 14 00:22:58.445546 amazon-ssm-agent[2181]: 2026-03-14 00:22:58 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Mar 14 00:22:58.445546 amazon-ssm-agent[2181]: 2026-03-14 00:22:58 INFO [Registrar] Starting registrar module
Mar 14 00:22:58.445546 amazon-ssm-agent[2181]: 2026-03-14 00:22:58 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Mar 14 00:22:58.445546 amazon-ssm-agent[2181]: 2026-03-14 00:22:58 INFO [EC2Identity] EC2 registration was successful.
Mar 14 00:22:58.445546 amazon-ssm-agent[2181]: 2026-03-14 00:22:58 INFO [CredentialRefresher] credentialRefresher has started
Mar 14 00:22:58.446580 amazon-ssm-agent[2181]: 2026-03-14 00:22:58 INFO [CredentialRefresher] Starting credentials refresher loop
Mar 14 00:22:58.446580 amazon-ssm-agent[2181]: 2026-03-14 00:22:58 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Mar 14 00:22:58.473667 systemd[1]: Started sshd@1-172.31.30.82:22-68.220.241.50:58316.service - OpenSSH per-connection server daemon (68.220.241.50:58316).
Mar 14 00:22:58.524420 amazon-ssm-agent[2181]: 2026-03-14 00:22:58 INFO [CredentialRefresher] Next credential rotation will be in 32.24165718578333 minutes
Mar 14 00:22:58.968959 sshd[2204]: Accepted publickey for core from 68.220.241.50 port 58316 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:22:58.971403 sshd[2204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:22:58.977567 systemd-logind[1958]: New session 2 of user core.
Mar 14 00:22:58.988552 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 14 00:22:59.322393 sshd[2204]: pam_unix(sshd:session): session closed for user core
Mar 14 00:22:59.327051 systemd[1]: sshd@1-172.31.30.82:22-68.220.241.50:58316.service: Deactivated successfully.
Mar 14 00:22:59.330234 systemd[1]: session-2.scope: Deactivated successfully.
Mar 14 00:22:59.332640 systemd-logind[1958]: Session 2 logged out. Waiting for processes to exit.
Mar 14 00:22:59.334200 systemd-logind[1958]: Removed session 2.
Mar 14 00:22:59.410181 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:22:59.412856 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 14 00:22:59.421797 systemd[1]: Started sshd@2-172.31.30.82:22-68.220.241.50:58326.service - OpenSSH per-connection server daemon (68.220.241.50:58326).
Mar 14 00:22:59.423656 (kubelet)[2215]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:22:59.424269 systemd[1]: Startup finished in 674ms (kernel) + 18.509s (initrd) + 7.248s (userspace) = 26.431s.
Mar 14 00:22:59.466465 amazon-ssm-agent[2181]: 2026-03-14 00:22:59 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Mar 14 00:22:59.567452 amazon-ssm-agent[2181]: 2026-03-14 00:22:59 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2224) started
Mar 14 00:22:59.667667 amazon-ssm-agent[2181]: 2026-03-14 00:22:59 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Mar 14 00:22:59.938464 sshd[2217]: Accepted publickey for core from 68.220.241.50 port 58326 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:22:59.939941 sshd[2217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:22:59.966022 systemd-logind[1958]: New session 3 of user core.
Mar 14 00:22:59.973405 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 14 00:23:00.292600 sshd[2217]: pam_unix(sshd:session): session closed for user core
Mar 14 00:23:00.299007 systemd[1]: sshd@2-172.31.30.82:22-68.220.241.50:58326.service: Deactivated successfully.
Mar 14 00:23:00.299204 systemd-logind[1958]: Session 3 logged out. Waiting for processes to exit.
Mar 14 00:23:00.302802 systemd[1]: session-3.scope: Deactivated successfully.
Mar 14 00:23:00.304330 systemd-logind[1958]: Removed session 3.
Mar 14 00:23:00.338847 ntpd[1951]: Listen normally on 7 eth0 [fe80::4ee:38ff:fe1b:6831%2]:123
Mar 14 00:23:00.339667 ntpd[1951]: 14 Mar 00:23:00 ntpd[1951]: Listen normally on 7 eth0 [fe80::4ee:38ff:fe1b:6831%2]:123
Mar 14 00:23:00.556657 kubelet[2215]: E0314 00:23:00.556455 2215 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:23:00.559564 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:23:00.559766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:23:00.560524 systemd[1]: kubelet.service: Consumed 1.089s CPU time.
Mar 14 00:23:03.642680 systemd-resolved[1899]: Clock change detected. Flushing caches.
Mar 14 00:23:10.682003 systemd[1]: Started sshd@3-172.31.30.82:22-68.220.241.50:52764.service - OpenSSH per-connection server daemon (68.220.241.50:52764).
Mar 14 00:23:11.034952 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 14 00:23:11.040777 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:23:11.181828 sshd[2246]: Accepted publickey for core from 68.220.241.50 port 52764 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:23:11.182753 sshd[2246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:23:11.191855 systemd-logind[1958]: New session 4 of user core.
Mar 14 00:23:11.195457 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 14 00:23:11.272327 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:23:11.274346 (kubelet)[2257]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:23:11.331285 kubelet[2257]: E0314 00:23:11.331078 2257 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:23:11.335613 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:23:11.335817 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:23:11.530243 sshd[2246]: pam_unix(sshd:session): session closed for user core
Mar 14 00:23:11.533977 systemd[1]: sshd@3-172.31.30.82:22-68.220.241.50:52764.service: Deactivated successfully.
Mar 14 00:23:11.535998 systemd[1]: session-4.scope: Deactivated successfully.
Mar 14 00:23:11.537999 systemd-logind[1958]: Session 4 logged out. Waiting for processes to exit.
Mar 14 00:23:11.539582 systemd-logind[1958]: Removed session 4.
Mar 14 00:23:11.621460 systemd[1]: Started sshd@4-172.31.30.82:22-68.220.241.50:52772.service - OpenSSH per-connection server daemon (68.220.241.50:52772).
Mar 14 00:23:12.120138 sshd[2269]: Accepted publickey for core from 68.220.241.50 port 52772 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:23:12.121681 sshd[2269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:23:12.127291 systemd-logind[1958]: New session 5 of user core.
Mar 14 00:23:12.134475 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 14 00:23:12.471341 sshd[2269]: pam_unix(sshd:session): session closed for user core
Mar 14 00:23:12.475877 systemd-logind[1958]: Session 5 logged out. Waiting for processes to exit.
Mar 14 00:23:12.477197 systemd[1]: sshd@4-172.31.30.82:22-68.220.241.50:52772.service: Deactivated successfully.
Mar 14 00:23:12.479234 systemd[1]: session-5.scope: Deactivated successfully.
Mar 14 00:23:12.480377 systemd-logind[1958]: Removed session 5.
Mar 14 00:23:12.557687 systemd[1]: Started sshd@5-172.31.30.82:22-68.220.241.50:36122.service - OpenSSH per-connection server daemon (68.220.241.50:36122).
Mar 14 00:23:13.052154 sshd[2276]: Accepted publickey for core from 68.220.241.50 port 36122 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:23:13.052854 sshd[2276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:23:13.058253 systemd-logind[1958]: New session 6 of user core.
Mar 14 00:23:13.065419 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 14 00:23:13.401381 sshd[2276]: pam_unix(sshd:session): session closed for user core
Mar 14 00:23:13.406230 systemd-logind[1958]: Session 6 logged out. Waiting for processes to exit.
Mar 14 00:23:13.413763 systemd[1]: sshd@5-172.31.30.82:22-68.220.241.50:36122.service: Deactivated successfully.
Mar 14 00:23:13.419519 systemd[1]: session-6.scope: Deactivated successfully.
Mar 14 00:23:13.423370 systemd-logind[1958]: Removed session 6.
Mar 14 00:23:13.491497 systemd[1]: Started sshd@6-172.31.30.82:22-68.220.241.50:36128.service - OpenSSH per-connection server daemon (68.220.241.50:36128).
Mar 14 00:23:13.972390 sshd[2283]: Accepted publickey for core from 68.220.241.50 port 36128 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:23:13.973962 sshd[2283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:23:13.979200 systemd-logind[1958]: New session 7 of user core.
Mar 14 00:23:13.990515 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 14 00:23:14.259016 sudo[2286]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 14 00:23:14.259456 sudo[2286]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 00:23:14.277078 sudo[2286]: pam_unix(sudo:session): session closed for user root
Mar 14 00:23:14.353971 sshd[2283]: pam_unix(sshd:session): session closed for user core
Mar 14 00:23:14.358820 systemd-logind[1958]: Session 7 logged out. Waiting for processes to exit.
Mar 14 00:23:14.359696 systemd[1]: sshd@6-172.31.30.82:22-68.220.241.50:36128.service: Deactivated successfully.
Mar 14 00:23:14.361804 systemd[1]: session-7.scope: Deactivated successfully.
Mar 14 00:23:14.362886 systemd-logind[1958]: Removed session 7.
Mar 14 00:23:14.447495 systemd[1]: Started sshd@7-172.31.30.82:22-68.220.241.50:36144.service - OpenSSH per-connection server daemon (68.220.241.50:36144).
Mar 14 00:23:14.940714 sshd[2291]: Accepted publickey for core from 68.220.241.50 port 36144 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:23:14.942585 sshd[2291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:23:14.948213 systemd-logind[1958]: New session 8 of user core.
Mar 14 00:23:14.957404 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 14 00:23:15.215636 sudo[2295]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 14 00:23:15.216400 sudo[2295]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 00:23:15.220954 sudo[2295]: pam_unix(sudo:session): session closed for user root
Mar 14 00:23:15.226941 sudo[2294]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Mar 14 00:23:15.227389 sudo[2294]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 00:23:15.241538 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Mar 14 00:23:15.245675 auditctl[2298]: No rules
Mar 14 00:23:15.246147 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 14 00:23:15.246505 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Mar 14 00:23:15.249561 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 14 00:23:15.288238 augenrules[2316]: No rules
Mar 14 00:23:15.289633 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 14 00:23:15.292273 sudo[2294]: pam_unix(sudo:session): session closed for user root
Mar 14 00:23:15.369006 sshd[2291]: pam_unix(sshd:session): session closed for user core
Mar 14 00:23:15.372562 systemd[1]: sshd@7-172.31.30.82:22-68.220.241.50:36144.service: Deactivated successfully.
Mar 14 00:23:15.374696 systemd[1]: session-8.scope: Deactivated successfully.
Mar 14 00:23:15.376249 systemd-logind[1958]: Session 8 logged out. Waiting for processes to exit.
Mar 14 00:23:15.377526 systemd-logind[1958]: Removed session 8.
Mar 14 00:23:15.463465 systemd[1]: Started sshd@8-172.31.30.82:22-68.220.241.50:36158.service - OpenSSH per-connection server daemon (68.220.241.50:36158).
Mar 14 00:23:15.951764 sshd[2324]: Accepted publickey for core from 68.220.241.50 port 36158 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:23:15.953451 sshd[2324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:23:15.959500 systemd-logind[1958]: New session 9 of user core.
Mar 14 00:23:15.966461 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 14 00:23:16.228698 sudo[2327]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 14 00:23:16.229126 sudo[2327]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 00:23:16.617442 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 14 00:23:16.617644 (dockerd)[2344]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 14 00:23:16.993466 dockerd[2344]: time="2026-03-14T00:23:16.993317535Z" level=info msg="Starting up"
Mar 14 00:23:17.158726 dockerd[2344]: time="2026-03-14T00:23:17.158653717Z" level=info msg="Loading containers: start."
Mar 14 00:23:17.309140 kernel: Initializing XFRM netlink socket
Mar 14 00:23:17.338483 (udev-worker)[2368]: Network interface NamePolicy= disabled on kernel command line.
Mar 14 00:23:17.408019 systemd-networkd[1898]: docker0: Link UP
Mar 14 00:23:17.432929 dockerd[2344]: time="2026-03-14T00:23:17.432882661Z" level=info msg="Loading containers: done."
Mar 14 00:23:17.458151 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3953561851-merged.mount: Deactivated successfully.
Mar 14 00:23:17.484631 dockerd[2344]: time="2026-03-14T00:23:17.484564647Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 14 00:23:17.484839 dockerd[2344]: time="2026-03-14T00:23:17.484703508Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 14 00:23:17.484890 dockerd[2344]: time="2026-03-14T00:23:17.484850258Z" level=info msg="Daemon has completed initialization"
Mar 14 00:23:17.540298 dockerd[2344]: time="2026-03-14T00:23:17.539618966Z" level=info msg="API listen on /run/docker.sock"
Mar 14 00:23:17.539741 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 14 00:23:18.243626 containerd[1970]: time="2026-03-14T00:23:18.243574644Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\""
Mar 14 00:23:18.841492 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2204126051.mount: Deactivated successfully.
Mar 14 00:23:20.608685 containerd[1970]: time="2026-03-14T00:23:20.608628170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:23:20.610057 containerd[1970]: time="2026-03-14T00:23:20.609988573Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116186"
Mar 14 00:23:20.611124 containerd[1970]: time="2026-03-14T00:23:20.610981856Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:23:20.614103 containerd[1970]: time="2026-03-14T00:23:20.614029420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:23:20.616058 containerd[1970]: time="2026-03-14T00:23:20.615461807Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 2.37183146s"
Mar 14 00:23:20.616058 containerd[1970]: time="2026-03-14T00:23:20.615507032Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\""
Mar 14 00:23:20.616294 containerd[1970]: time="2026-03-14T00:23:20.616271352Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\""
Mar 14 00:23:21.535052 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
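As a rough cross-check of the containerd timings above (a back-of-the-envelope sketch, not part of the log: the byte count and duration are taken verbatim from the "bytes read" counter and the "Pulled image ... in ..." entry for kube-apiserver):

```python
# Rough pull-throughput estimate for the kube-apiserver image pull logged above.
bytes_read = 30_116_186   # from "stop pulling image ... bytes read=30116186"
duration_s = 2.37183146   # from "... size \"30112785\" in 2.37183146s"

rate_mib_s = bytes_read / duration_s / (1 << 20)
print(f"~{rate_mib_s:.1f} MiB/s")  # roughly 12 MiB/s from registry.k8s.io
```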
Mar 14 00:23:21.540386 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:23:21.794481 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:23:21.797639 (kubelet)[2554]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:23:21.865508 kubelet[2554]: E0314 00:23:21.865373 2554 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:23:21.868228 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:23:21.868422 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:23:22.625149 containerd[1970]: time="2026-03-14T00:23:22.625067534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:23:22.627119 containerd[1970]: time="2026-03-14T00:23:22.627008937Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021810"
Mar 14 00:23:22.629447 containerd[1970]: time="2026-03-14T00:23:22.629385347Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:23:22.633997 containerd[1970]: time="2026-03-14T00:23:22.633773905Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:23:22.636206 containerd[1970]: time="2026-03-14T00:23:22.635913699Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 2.019604253s"
Mar 14 00:23:22.636206 containerd[1970]: time="2026-03-14T00:23:22.635967238Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\""
Mar 14 00:23:22.640102 containerd[1970]: time="2026-03-14T00:23:22.638721985Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\""
Mar 14 00:23:24.155374 containerd[1970]: time="2026-03-14T00:23:24.155308255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:23:24.156753 containerd[1970]: time="2026-03-14T00:23:24.156692194Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162746"
Mar 14 00:23:24.157746 containerd[1970]: time="2026-03-14T00:23:24.157685830Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:23:24.161354 containerd[1970]: time="2026-03-14T00:23:24.160822817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:23:24.162362 containerd[1970]: time="2026-03-14T00:23:24.162318529Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 1.523556084s"
Mar 14 00:23:24.162447 containerd[1970]: time="2026-03-14T00:23:24.162367631Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\""
Mar 14 00:23:24.163310 containerd[1970]: time="2026-03-14T00:23:24.163267575Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\""
Mar 14 00:23:25.286525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3623706695.mount: Deactivated successfully.
Mar 14 00:23:25.931035 containerd[1970]: time="2026-03-14T00:23:25.930981752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:23:25.932123 containerd[1970]: time="2026-03-14T00:23:25.932013836Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828647"
Mar 14 00:23:25.933394 containerd[1970]: time="2026-03-14T00:23:25.933358040Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:23:25.937128 containerd[1970]: time="2026-03-14T00:23:25.936890911Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:23:25.937981 containerd[1970]: time="2026-03-14T00:23:25.937711385Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 1.774400338s"
Mar 14 00:23:25.937981 containerd[1970]: time="2026-03-14T00:23:25.937747783Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\""
Mar 14 00:23:25.938770 containerd[1970]: time="2026-03-14T00:23:25.938722862Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Mar 14 00:23:26.473484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3259141365.mount: Deactivated successfully.
Mar 14 00:23:27.457992 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar 14 00:23:27.736603 containerd[1970]: time="2026-03-14T00:23:27.736472186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:23:27.737989 containerd[1970]: time="2026-03-14T00:23:27.737935554Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Mar 14 00:23:27.739989 containerd[1970]: time="2026-03-14T00:23:27.739952583Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:23:27.744649 containerd[1970]: time="2026-03-14T00:23:27.744603882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:23:27.747168 containerd[1970]: time="2026-03-14T00:23:27.746243727Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.807466259s"
Mar 14 00:23:27.747168 containerd[1970]: time="2026-03-14T00:23:27.746306301Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Mar 14 00:23:27.747168 containerd[1970]: time="2026-03-14T00:23:27.746933163Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 14 00:23:28.238928 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2973560407.mount: Deactivated successfully.
Mar 14 00:23:28.245809 containerd[1970]: time="2026-03-14T00:23:28.245760090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:23:28.246991 containerd[1970]: time="2026-03-14T00:23:28.246911987Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Mar 14 00:23:28.251116 containerd[1970]: time="2026-03-14T00:23:28.250035940Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:23:28.254931 containerd[1970]: time="2026-03-14T00:23:28.254887359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:23:28.255874 containerd[1970]: time="2026-03-14T00:23:28.255839763Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 508.875366ms"
Mar 14 00:23:28.256018 containerd[1970]: time="2026-03-14T00:23:28.255996097Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Mar 14 00:23:28.256588 containerd[1970]: time="2026-03-14T00:23:28.256546781Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Mar 14 00:23:28.772242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1194629531.mount: Deactivated successfully.
Mar 14 00:23:29.952707 containerd[1970]: time="2026-03-14T00:23:29.952645026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:23:29.954306 containerd[1970]: time="2026-03-14T00:23:29.954244569Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718840"
Mar 14 00:23:29.955476 containerd[1970]: time="2026-03-14T00:23:29.955182553Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:23:29.959053 containerd[1970]: time="2026-03-14T00:23:29.958529036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:23:29.959923 containerd[1970]: time="2026-03-14T00:23:29.959881322Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.70328466s"
Mar 14 00:23:29.960010 containerd[1970]: time="2026-03-14T00:23:29.959930264Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Mar 14 00:23:32.034966 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 14 00:23:32.045221 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:23:32.326595 (kubelet)[2724]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:23:32.327289 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:23:32.391144 kubelet[2724]: E0314 00:23:32.388164 2724 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:23:32.391379 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:23:32.391563 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:23:33.377762 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:23:33.386451 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:23:33.429810 systemd[1]: Reloading requested from client PID 2739 ('systemctl') (unit session-9.scope)...
Mar 14 00:23:33.429994 systemd[1]: Reloading...
Mar 14 00:23:33.584968 zram_generator::config[2779]: No configuration found.
Mar 14 00:23:33.731307 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
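The repeated kubelet start failures above all trace back to one cause: /var/lib/kubelet/config.yaml does not exist yet at this point in the boot. That file is normally generated by `kubeadm init` or `kubeadm join` rather than written by hand; purely as an illustration of what it contains, a minimal stand-in (field values here are assumptions, not taken from this host, though the log's own nodeConfig later confirms cgroupDriver systemd and the /etc/kubernetes/manifests static pod path) might look like:

```yaml
# /var/lib/kubelet/config.yaml — illustrative sketch only;
# kubeadm generates the real file during init/join.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd                      # matches "CgroupDriver":"systemd" logged below
staticPodPath: /etc/kubernetes/manifests   # matches "Adding static pod path" logged below
```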
Mar 14 00:23:33.816848 systemd[1]: Reloading finished in 386 ms.
Mar 14 00:23:33.872840 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 14 00:23:33.872944 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 14 00:23:33.873316 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:23:33.885729 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:23:34.109825 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:23:34.120903 (kubelet)[2843]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 14 00:23:34.172065 kubelet[2843]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 14 00:23:34.172434 kubelet[2843]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 14 00:23:34.172478 kubelet[2843]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 14 00:23:34.172616 kubelet[2843]: I0314 00:23:34.172588 2843 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 14 00:23:35.254124 kubelet[2843]: I0314 00:23:35.252517 2843 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 14 00:23:35.254124 kubelet[2843]: I0314 00:23:35.252690 2843 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 14 00:23:35.254124 kubelet[2843]: I0314 00:23:35.253052 2843 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 14 00:23:35.302124 kubelet[2843]: E0314 00:23:35.302037 2843 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.30.82:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.30.82:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 14 00:23:35.306108 kubelet[2843]: I0314 00:23:35.306008 2843 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 14 00:23:35.316336 kubelet[2843]: E0314 00:23:35.316289 2843 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 14 00:23:35.316544 kubelet[2843]: I0314 00:23:35.316521 2843 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 14 00:23:35.326993 kubelet[2843]: I0314 00:23:35.326961 2843 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 14 00:23:35.331072 kubelet[2843]: I0314 00:23:35.331006 2843 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 14 00:23:35.335629 kubelet[2843]: I0314 00:23:35.331063 2843 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-82","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 14 00:23:35.336646 kubelet[2843]: I0314 00:23:35.336574 2843 topology_manager.go:138] "Creating topology manager with none policy"
Mar 14 00:23:35.336646 kubelet[2843]: I0314 00:23:35.336642 2843 container_manager_linux.go:303] "Creating device plugin manager"
Mar 14 00:23:35.336845 kubelet[2843]: I0314 00:23:35.336818 2843 state_mem.go:36] "Initialized new in-memory state store"
Mar 14 00:23:35.344583 kubelet[2843]: I0314 00:23:35.344522 2843 kubelet.go:480] "Attempting to sync node with API server"
Mar 14 00:23:35.344583 kubelet[2843]: I0314 00:23:35.344584 2843 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 14 00:23:35.345244 kubelet[2843]: I0314 00:23:35.344620 2843 kubelet.go:386] "Adding apiserver pod source"
Mar 14 00:23:35.345244 kubelet[2843]: I0314 00:23:35.344645 2843 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 14 00:23:35.367584 kubelet[2843]: E0314 00:23:35.367165 2843 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.30.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-82&limit=500&resourceVersion=0\": dial tcp 172.31.30.82:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 14 00:23:35.368153 kubelet[2843]: I0314 00:23:35.368130 2843 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 14 00:23:35.368355 kubelet[2843]: E0314 00:23:35.368120 2843 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.30.82:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.30.82:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 14 00:23:35.368915 kubelet[2843]: I0314 00:23:35.368892 2843 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 14 00:23:35.371759 kubelet[2843]: W0314 00:23:35.370047 2843 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 14 00:23:35.378315 kubelet[2843]: I0314 00:23:35.378274 2843 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 14 00:23:35.378442 kubelet[2843]: I0314 00:23:35.378352 2843 server.go:1289] "Started kubelet"
Mar 14 00:23:35.378602 kubelet[2843]: I0314 00:23:35.378530 2843 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 14 00:23:35.380178 kubelet[2843]: I0314 00:23:35.379707 2843 server.go:317] "Adding debug handlers to kubelet server"
Mar 14 00:23:35.382647 kubelet[2843]: I0314 00:23:35.382158 2843 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 14 00:23:35.382740 kubelet[2843]: I0314 00:23:35.382673 2843 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 14 00:23:35.386120 kubelet[2843]: E0314 00:23:35.382833 2843 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.30.82:6443/api/v1/namespaces/default/events\": dial tcp 172.31.30.82:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-30-82.189c8d64575105f7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-82,UID:ip-172-31-30-82,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-82,},FirstTimestamp:2026-03-14 00:23:35.378306551 +0000 UTC m=+1.251740053,LastTimestamp:2026-03-14 00:23:35.378306551 +0000 UTC m=+1.251740053,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-82,}"
Mar 14 00:23:35.387925 kubelet[2843]: I0314 00:23:35.387906 2843 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 14 00:23:35.390445 kubelet[2843]: I0314 00:23:35.390426 2843 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 14 00:23:35.390670 kubelet[2843]: I0314 00:23:35.390652 2843 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 14 00:23:35.393783 kubelet[2843]: I0314 00:23:35.393764 2843 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 14 00:23:35.393985 kubelet[2843]: I0314 00:23:35.393975 2843 reconciler.go:26] "Reconciler: start to sync state"
Mar 14 00:23:35.394925 kubelet[2843]: E0314 00:23:35.394896 2843 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.30.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.30.82:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 14 00:23:35.395702 kubelet[2843]: E0314 00:23:35.395679 2843 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-30-82\" not found"
Mar 14 00:23:35.395924 kubelet[2843]: E0314 00:23:35.395901 2843 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-82?timeout=10s\": dial tcp 172.31.30.82:6443: connect: connection refused" interval="200ms"
Mar 14 00:23:35.400657 kubelet[2843]: I0314 00:23:35.400636 2843 factory.go:223] Registration of the systemd container factory successfully
Mar 14 00:23:35.401052 kubelet[2843]: I0314 00:23:35.401020 2843 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 14 00:23:35.403326 kubelet[2843]: I0314 00:23:35.403297 2843 factory.go:223] Registration of the containerd container factory successfully
Mar 14 00:23:35.405590 kubelet[2843]: E0314 00:23:35.405555 2843 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 14 00:23:35.408829 kubelet[2843]: I0314 00:23:35.408665 2843 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 14 00:23:35.432160 kubelet[2843]: I0314 00:23:35.432131 2843 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 14 00:23:35.432160 kubelet[2843]: I0314 00:23:35.432163 2843 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 14 00:23:35.432160 kubelet[2843]: I0314 00:23:35.432184 2843 state_mem.go:36] "Initialized new in-memory state store"
Mar 14 00:23:35.435377 kubelet[2843]: I0314 00:23:35.435351 2843 policy_none.go:49] "None policy: Start"
Mar 14 00:23:35.435485 kubelet[2843]: I0314 00:23:35.435384 2843 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 14 00:23:35.435485 kubelet[2843]: I0314 00:23:35.435402 2843 state_mem.go:35] "Initializing new in-memory state store"
Mar 14 00:23:35.443480 kubelet[2843]: I0314 00:23:35.442894 2843 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 14 00:23:35.443480 kubelet[2843]: I0314 00:23:35.442939 2843 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 14 00:23:35.443480 kubelet[2843]: I0314 00:23:35.443027 2843 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 14 00:23:35.443480 kubelet[2843]: I0314 00:23:35.443039 2843 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 14 00:23:35.443480 kubelet[2843]: E0314 00:23:35.443139 2843 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 14 00:23:35.445037 kubelet[2843]: E0314 00:23:35.444699 2843 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.30.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.30.82:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 14 00:23:35.452328 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 14 00:23:35.469609 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 14 00:23:35.474558 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 14 00:23:35.486920 kubelet[2843]: E0314 00:23:35.485397 2843 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 14 00:23:35.486920 kubelet[2843]: I0314 00:23:35.485638 2843 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 14 00:23:35.486920 kubelet[2843]: I0314 00:23:35.485661 2843 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 14 00:23:35.486920 kubelet[2843]: I0314 00:23:35.486697 2843 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 14 00:23:35.494734 kubelet[2843]: E0314 00:23:35.494712 2843 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 14 00:23:35.494890 kubelet[2843]: E0314 00:23:35.494882 2843 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-30-82\" not found"
Mar 14 00:23:35.559485 systemd[1]: Created slice kubepods-burstable-pod7757a779bdbc73047d8315ee4ec4bc40.slice - libcontainer container kubepods-burstable-pod7757a779bdbc73047d8315ee4ec4bc40.slice.
Mar 14 00:23:35.580527 kubelet[2843]: E0314 00:23:35.580112 2843 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-82\" not found" node="ip-172-31-30-82"
Mar 14 00:23:35.583761 systemd[1]: Created slice kubepods-burstable-pod51b8b8a6f54c523e6bf48e95f8902b50.slice - libcontainer container kubepods-burstable-pod51b8b8a6f54c523e6bf48e95f8902b50.slice.
Mar 14 00:23:35.590117 kubelet[2843]: I0314 00:23:35.587954 2843 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-82"
Mar 14 00:23:35.590117 kubelet[2843]: E0314 00:23:35.589687 2843 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.82:6443/api/v1/nodes\": dial tcp 172.31.30.82:6443: connect: connection refused" node="ip-172-31-30-82"
Mar 14 00:23:35.594535 kubelet[2843]: E0314 00:23:35.594508 2843 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-82\" not found" node="ip-172-31-30-82"
Mar 14 00:23:35.596478 kubelet[2843]: E0314 00:23:35.596442 2843 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-82?timeout=10s\": dial tcp 172.31.30.82:6443: connect: connection refused" interval="400ms"
Mar 14 00:23:35.599363 systemd[1]: Created slice kubepods-burstable-pod1d63ca2531157db9e35494cc7e51703b.slice - libcontainer container
kubepods-burstable-pod1d63ca2531157db9e35494cc7e51703b.slice. Mar 14 00:23:35.601237 kubelet[2843]: E0314 00:23:35.601208 2843 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-82\" not found" node="ip-172-31-30-82" Mar 14 00:23:35.695224 kubelet[2843]: I0314 00:23:35.695173 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/51b8b8a6f54c523e6bf48e95f8902b50-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-82\" (UID: \"51b8b8a6f54c523e6bf48e95f8902b50\") " pod="kube-system/kube-controller-manager-ip-172-31-30-82" Mar 14 00:23:35.695224 kubelet[2843]: I0314 00:23:35.695227 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/51b8b8a6f54c523e6bf48e95f8902b50-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-82\" (UID: \"51b8b8a6f54c523e6bf48e95f8902b50\") " pod="kube-system/kube-controller-manager-ip-172-31-30-82" Mar 14 00:23:35.695516 kubelet[2843]: I0314 00:23:35.695271 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/51b8b8a6f54c523e6bf48e95f8902b50-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-82\" (UID: \"51b8b8a6f54c523e6bf48e95f8902b50\") " pod="kube-system/kube-controller-manager-ip-172-31-30-82" Mar 14 00:23:35.695516 kubelet[2843]: I0314 00:23:35.695306 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/51b8b8a6f54c523e6bf48e95f8902b50-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-82\" (UID: \"51b8b8a6f54c523e6bf48e95f8902b50\") " pod="kube-system/kube-controller-manager-ip-172-31-30-82" Mar 14 00:23:35.695516 
kubelet[2843]: I0314 00:23:35.695330 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/51b8b8a6f54c523e6bf48e95f8902b50-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-82\" (UID: \"51b8b8a6f54c523e6bf48e95f8902b50\") " pod="kube-system/kube-controller-manager-ip-172-31-30-82" Mar 14 00:23:35.695516 kubelet[2843]: I0314 00:23:35.695351 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1d63ca2531157db9e35494cc7e51703b-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-82\" (UID: \"1d63ca2531157db9e35494cc7e51703b\") " pod="kube-system/kube-scheduler-ip-172-31-30-82" Mar 14 00:23:35.695516 kubelet[2843]: I0314 00:23:35.695373 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7757a779bdbc73047d8315ee4ec4bc40-ca-certs\") pod \"kube-apiserver-ip-172-31-30-82\" (UID: \"7757a779bdbc73047d8315ee4ec4bc40\") " pod="kube-system/kube-apiserver-ip-172-31-30-82" Mar 14 00:23:35.695734 kubelet[2843]: I0314 00:23:35.695394 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7757a779bdbc73047d8315ee4ec4bc40-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-82\" (UID: \"7757a779bdbc73047d8315ee4ec4bc40\") " pod="kube-system/kube-apiserver-ip-172-31-30-82" Mar 14 00:23:35.695734 kubelet[2843]: I0314 00:23:35.695416 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7757a779bdbc73047d8315ee4ec4bc40-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-82\" (UID: \"7757a779bdbc73047d8315ee4ec4bc40\") " pod="kube-system/kube-apiserver-ip-172-31-30-82" Mar 14 
00:23:35.792189 kubelet[2843]: I0314 00:23:35.792147 2843 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-82" Mar 14 00:23:35.792571 kubelet[2843]: E0314 00:23:35.792533 2843 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.82:6443/api/v1/nodes\": dial tcp 172.31.30.82:6443: connect: connection refused" node="ip-172-31-30-82" Mar 14 00:23:35.883461 containerd[1970]: time="2026-03-14T00:23:35.882080934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-82,Uid:7757a779bdbc73047d8315ee4ec4bc40,Namespace:kube-system,Attempt:0,}" Mar 14 00:23:35.896725 containerd[1970]: time="2026-03-14T00:23:35.896677795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-82,Uid:51b8b8a6f54c523e6bf48e95f8902b50,Namespace:kube-system,Attempt:0,}" Mar 14 00:23:35.902794 containerd[1970]: time="2026-03-14T00:23:35.902727575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-82,Uid:1d63ca2531157db9e35494cc7e51703b,Namespace:kube-system,Attempt:0,}" Mar 14 00:23:35.997872 kubelet[2843]: E0314 00:23:35.997821 2843 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-82?timeout=10s\": dial tcp 172.31.30.82:6443: connect: connection refused" interval="800ms" Mar 14 00:23:36.194808 kubelet[2843]: I0314 00:23:36.194362 2843 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-82" Mar 14 00:23:36.194808 kubelet[2843]: E0314 00:23:36.194715 2843 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.82:6443/api/v1/nodes\": dial tcp 172.31.30.82:6443: connect: connection refused" node="ip-172-31-30-82" Mar 14 00:23:36.287940 kubelet[2843]: E0314 00:23:36.287892 2843 reflector.go:200] "Failed to watch" err="failed 
to list *v1.Service: Get \"https://172.31.30.82:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.30.82:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 14 00:23:36.341904 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1433613509.mount: Deactivated successfully. Mar 14 00:23:36.351871 containerd[1970]: time="2026-03-14T00:23:36.351805393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:23:36.352882 containerd[1970]: time="2026-03-14T00:23:36.352824809Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 14 00:23:36.355113 containerd[1970]: time="2026-03-14T00:23:36.354326837Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:23:36.356900 containerd[1970]: time="2026-03-14T00:23:36.356859826Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:23:36.358648 containerd[1970]: time="2026-03-14T00:23:36.358599306Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 14 00:23:36.363709 containerd[1970]: time="2026-03-14T00:23:36.361845905Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 14 00:23:36.363709 containerd[1970]: time="2026-03-14T00:23:36.363043863Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:23:36.367503 containerd[1970]: time="2026-03-14T00:23:36.366764770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:23:36.369115 containerd[1970]: time="2026-03-14T00:23:36.368844800Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 486.537283ms" Mar 14 00:23:36.370723 containerd[1970]: time="2026-03-14T00:23:36.370679352Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 467.842955ms" Mar 14 00:23:36.377759 containerd[1970]: time="2026-03-14T00:23:36.377707512Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 480.950023ms" Mar 14 00:23:36.479992 kubelet[2843]: E0314 00:23:36.479449 2843 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.30.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-82&limit=500&resourceVersion=0\": dial tcp 172.31.30.82:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 14 00:23:36.603880 containerd[1970]: time="2026-03-14T00:23:36.601930958Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:23:36.603880 containerd[1970]: time="2026-03-14T00:23:36.602235011Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:23:36.603880 containerd[1970]: time="2026-03-14T00:23:36.602264383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:23:36.603880 containerd[1970]: time="2026-03-14T00:23:36.602367046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:23:36.616129 kubelet[2843]: E0314 00:23:36.616070 2843 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.30.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.30.82:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 14 00:23:36.622478 containerd[1970]: time="2026-03-14T00:23:36.622382490Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:23:36.622885 containerd[1970]: time="2026-03-14T00:23:36.622690762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:23:36.622885 containerd[1970]: time="2026-03-14T00:23:36.622711531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:23:36.623526 containerd[1970]: time="2026-03-14T00:23:36.623450452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:23:36.626457 containerd[1970]: time="2026-03-14T00:23:36.626222052Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:23:36.626641 containerd[1970]: time="2026-03-14T00:23:36.626305479Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:23:36.626641 containerd[1970]: time="2026-03-14T00:23:36.626323775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:23:36.626641 containerd[1970]: time="2026-03-14T00:23:36.626417854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:23:36.652364 systemd[1]: Started cri-containerd-47640d277dd39fc2dc72b1cbf89c1cf0d87ae5da1e1664a6e56516a1a5135133.scope - libcontainer container 47640d277dd39fc2dc72b1cbf89c1cf0d87ae5da1e1664a6e56516a1a5135133. Mar 14 00:23:36.675355 systemd[1]: Started cri-containerd-e0c9a5d5821931483e959f2e8451a251b325c4a7e3c7956822d4f08765093a1f.scope - libcontainer container e0c9a5d5821931483e959f2e8451a251b325c4a7e3c7956822d4f08765093a1f. Mar 14 00:23:36.681421 systemd[1]: Started cri-containerd-248e489402aaca68060cf40896d824f6f1b676e1e61d15148a2b924c66f07717.scope - libcontainer container 248e489402aaca68060cf40896d824f6f1b676e1e61d15148a2b924c66f07717. 
Mar 14 00:23:36.768404 containerd[1970]: time="2026-03-14T00:23:36.766989274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-82,Uid:1d63ca2531157db9e35494cc7e51703b,Namespace:kube-system,Attempt:0,} returns sandbox id \"47640d277dd39fc2dc72b1cbf89c1cf0d87ae5da1e1664a6e56516a1a5135133\"" Mar 14 00:23:36.768404 containerd[1970]: time="2026-03-14T00:23:36.767142566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-82,Uid:51b8b8a6f54c523e6bf48e95f8902b50,Namespace:kube-system,Attempt:0,} returns sandbox id \"e0c9a5d5821931483e959f2e8451a251b325c4a7e3c7956822d4f08765093a1f\"" Mar 14 00:23:36.779141 containerd[1970]: time="2026-03-14T00:23:36.777632901Z" level=info msg="CreateContainer within sandbox \"47640d277dd39fc2dc72b1cbf89c1cf0d87ae5da1e1664a6e56516a1a5135133\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 14 00:23:36.781815 containerd[1970]: time="2026-03-14T00:23:36.781566815Z" level=info msg="CreateContainer within sandbox \"e0c9a5d5821931483e959f2e8451a251b325c4a7e3c7956822d4f08765093a1f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 14 00:23:36.799406 kubelet[2843]: E0314 00:23:36.799363 2843 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-82?timeout=10s\": dial tcp 172.31.30.82:6443: connect: connection refused" interval="1.6s" Mar 14 00:23:36.805069 containerd[1970]: time="2026-03-14T00:23:36.805017954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-82,Uid:7757a779bdbc73047d8315ee4ec4bc40,Namespace:kube-system,Attempt:0,} returns sandbox id \"248e489402aaca68060cf40896d824f6f1b676e1e61d15148a2b924c66f07717\"" Mar 14 00:23:36.811278 containerd[1970]: time="2026-03-14T00:23:36.811228646Z" level=info msg="CreateContainer within sandbox 
\"248e489402aaca68060cf40896d824f6f1b676e1e61d15148a2b924c66f07717\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 14 00:23:36.812608 containerd[1970]: time="2026-03-14T00:23:36.812397762Z" level=info msg="CreateContainer within sandbox \"47640d277dd39fc2dc72b1cbf89c1cf0d87ae5da1e1664a6e56516a1a5135133\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dfc6cceace99c86b37915f5c295b244184a04f8933ec512f980ad8ca12ccf804\"" Mar 14 00:23:36.813274 containerd[1970]: time="2026-03-14T00:23:36.813241876Z" level=info msg="StartContainer for \"dfc6cceace99c86b37915f5c295b244184a04f8933ec512f980ad8ca12ccf804\"" Mar 14 00:23:36.815285 containerd[1970]: time="2026-03-14T00:23:36.814854547Z" level=info msg="CreateContainer within sandbox \"e0c9a5d5821931483e959f2e8451a251b325c4a7e3c7956822d4f08765093a1f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"16ee28a6ac08fff501ca9aef97e1b246d5ae633251bb6fc2f7caf604cc6fac1d\"" Mar 14 00:23:36.815413 containerd[1970]: time="2026-03-14T00:23:36.815368463Z" level=info msg="StartContainer for \"16ee28a6ac08fff501ca9aef97e1b246d5ae633251bb6fc2f7caf604cc6fac1d\"" Mar 14 00:23:36.830433 containerd[1970]: time="2026-03-14T00:23:36.830207468Z" level=info msg="CreateContainer within sandbox \"248e489402aaca68060cf40896d824f6f1b676e1e61d15148a2b924c66f07717\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1dd5211487ac0afe33b8725741dda6a77f77986805c573152f54caeaedb63b19\"" Mar 14 00:23:36.831621 containerd[1970]: time="2026-03-14T00:23:36.831497204Z" level=info msg="StartContainer for \"1dd5211487ac0afe33b8725741dda6a77f77986805c573152f54caeaedb63b19\"" Mar 14 00:23:36.865324 systemd[1]: Started cri-containerd-16ee28a6ac08fff501ca9aef97e1b246d5ae633251bb6fc2f7caf604cc6fac1d.scope - libcontainer container 16ee28a6ac08fff501ca9aef97e1b246d5ae633251bb6fc2f7caf604cc6fac1d. 
Mar 14 00:23:36.877771 systemd[1]: Started cri-containerd-dfc6cceace99c86b37915f5c295b244184a04f8933ec512f980ad8ca12ccf804.scope - libcontainer container dfc6cceace99c86b37915f5c295b244184a04f8933ec512f980ad8ca12ccf804. Mar 14 00:23:36.890371 systemd[1]: Started cri-containerd-1dd5211487ac0afe33b8725741dda6a77f77986805c573152f54caeaedb63b19.scope - libcontainer container 1dd5211487ac0afe33b8725741dda6a77f77986805c573152f54caeaedb63b19. Mar 14 00:23:36.923922 kubelet[2843]: E0314 00:23:36.923848 2843 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.30.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.30.82:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 14 00:23:36.972111 containerd[1970]: time="2026-03-14T00:23:36.969884120Z" level=info msg="StartContainer for \"16ee28a6ac08fff501ca9aef97e1b246d5ae633251bb6fc2f7caf604cc6fac1d\" returns successfully" Mar 14 00:23:36.972111 containerd[1970]: time="2026-03-14T00:23:36.971854753Z" level=info msg="StartContainer for \"1dd5211487ac0afe33b8725741dda6a77f77986805c573152f54caeaedb63b19\" returns successfully" Mar 14 00:23:36.999713 kubelet[2843]: I0314 00:23:36.999405 2843 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-82" Mar 14 00:23:37.003301 kubelet[2843]: E0314 00:23:37.003218 2843 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.82:6443/api/v1/nodes\": dial tcp 172.31.30.82:6443: connect: connection refused" node="ip-172-31-30-82" Mar 14 00:23:37.009948 containerd[1970]: time="2026-03-14T00:23:37.009885812Z" level=info msg="StartContainer for \"dfc6cceace99c86b37915f5c295b244184a04f8933ec512f980ad8ca12ccf804\" returns successfully" Mar 14 00:23:37.384427 kubelet[2843]: E0314 00:23:37.384379 2843 certificate_manager.go:596] "Failed while requesting a signed certificate 
from the control plane" err="cannot create certificate signing request: Post \"https://172.31.30.82:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.30.82:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 14 00:23:37.456663 kubelet[2843]: E0314 00:23:37.456228 2843 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-82\" not found" node="ip-172-31-30-82" Mar 14 00:23:37.459233 kubelet[2843]: E0314 00:23:37.458696 2843 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-82\" not found" node="ip-172-31-30-82" Mar 14 00:23:37.462189 kubelet[2843]: E0314 00:23:37.460607 2843 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-82\" not found" node="ip-172-31-30-82" Mar 14 00:23:38.465338 kubelet[2843]: E0314 00:23:38.463533 2843 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-82\" not found" node="ip-172-31-30-82" Mar 14 00:23:38.472294 kubelet[2843]: E0314 00:23:38.472266 2843 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-82\" not found" node="ip-172-31-30-82" Mar 14 00:23:38.605576 kubelet[2843]: I0314 00:23:38.605550 2843 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-82" Mar 14 00:23:39.902467 kubelet[2843]: E0314 00:23:39.902257 2843 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-82\" not found" node="ip-172-31-30-82" Mar 14 00:23:40.022661 kubelet[2843]: I0314 00:23:40.022361 2843 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-30-82" Mar 14 00:23:40.022661 kubelet[2843]: E0314 
00:23:40.022518 2843 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-30-82\": node \"ip-172-31-30-82\" not found" Mar 14 00:23:40.073923 kubelet[2843]: E0314 00:23:40.073782 2843 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-30-82.189c8d64575105f7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-82,UID:ip-172-31-30-82,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-82,},FirstTimestamp:2026-03-14 00:23:35.378306551 +0000 UTC m=+1.251740053,LastTimestamp:2026-03-14 00:23:35.378306551 +0000 UTC m=+1.251740053,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-82,}" Mar 14 00:23:40.096316 kubelet[2843]: I0314 00:23:40.096275 2843 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-30-82" Mar 14 00:23:40.113290 kubelet[2843]: E0314 00:23:40.113250 2843 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-30-82\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-30-82" Mar 14 00:23:40.113290 kubelet[2843]: I0314 00:23:40.113288 2843 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-30-82" Mar 14 00:23:40.115560 kubelet[2843]: E0314 00:23:40.115520 2843 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-30-82\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-30-82" Mar 14 00:23:40.115560 kubelet[2843]: I0314 00:23:40.115555 2843 kubelet.go:3309] 
"Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-30-82" Mar 14 00:23:40.117688 kubelet[2843]: E0314 00:23:40.117646 2843 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-30-82\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-30-82" Mar 14 00:23:40.353311 kubelet[2843]: I0314 00:23:40.353259 2843 apiserver.go:52] "Watching apiserver" Mar 14 00:23:40.375591 kubelet[2843]: I0314 00:23:40.375560 2843 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-30-82" Mar 14 00:23:40.378708 kubelet[2843]: E0314 00:23:40.378666 2843 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-30-82\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-30-82" Mar 14 00:23:40.394386 kubelet[2843]: I0314 00:23:40.394337 2843 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 14 00:23:40.575195 kubelet[2843]: I0314 00:23:40.575158 2843 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-30-82" Mar 14 00:23:40.577510 kubelet[2843]: E0314 00:23:40.577434 2843 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-30-82\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-30-82" Mar 14 00:23:42.085052 systemd[1]: Reloading requested from client PID 3125 ('systemctl') (unit session-9.scope)... Mar 14 00:23:42.085079 systemd[1]: Reloading... Mar 14 00:23:42.221126 zram_generator::config[3171]: No configuration found. Mar 14 00:23:42.302050 update_engine[1960]: I20260314 00:23:42.301146 1960 update_attempter.cc:509] Updating boot flags... 
Mar 14 00:23:42.388125 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3228)
Mar 14 00:23:42.416886 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:23:42.618118 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3219)
Mar 14 00:23:42.654184 systemd[1]: Reloading finished in 568 ms.
Mar 14 00:23:42.785343 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:23:42.817538 systemd[1]: kubelet.service: Deactivated successfully.
Mar 14 00:23:42.818008 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:23:42.818201 systemd[1]: kubelet.service: Consumed 1.727s CPU time, 127.2M memory peak, 0B memory swap peak.
Mar 14 00:23:42.828204 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:23:43.161007 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:23:43.175689 (kubelet)[3407]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 14 00:23:43.255329 kubelet[3407]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 14 00:23:43.255329 kubelet[3407]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 14 00:23:43.255329 kubelet[3407]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 14 00:23:43.255329 kubelet[3407]: I0314 00:23:43.255144 3407 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 14 00:23:43.268554 kubelet[3407]: I0314 00:23:43.268334 3407 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 14 00:23:43.268554 kubelet[3407]: I0314 00:23:43.268386 3407 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 14 00:23:43.268832 kubelet[3407]: I0314 00:23:43.268737 3407 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 14 00:23:43.270684 kubelet[3407]: I0314 00:23:43.270635 3407 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 14 00:23:43.284485 sudo[3418]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Mar 14 00:23:43.284932 sudo[3418]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Mar 14 00:23:43.295438 kubelet[3407]: I0314 00:23:43.295237 3407 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 14 00:23:43.309431 kubelet[3407]: E0314 00:23:43.309383 3407 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 14 00:23:43.309431 kubelet[3407]: I0314 00:23:43.309429 3407 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 14 00:23:43.313621 kubelet[3407]: I0314 00:23:43.313571 3407 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 14 00:23:43.313934 kubelet[3407]: I0314 00:23:43.313890 3407 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 14 00:23:43.314326 kubelet[3407]: I0314 00:23:43.313938 3407 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-82","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 14 00:23:43.314326 kubelet[3407]: I0314 00:23:43.314183 3407 topology_manager.go:138] "Creating topology manager with none policy"
Mar 14 00:23:43.314326 kubelet[3407]: I0314 00:23:43.314198 3407 container_manager_linux.go:303] "Creating device plugin manager"
Mar 14 00:23:43.316669 kubelet[3407]: I0314 00:23:43.316157 3407 state_mem.go:36] "Initialized new in-memory state store"
Mar 14 00:23:43.318272 kubelet[3407]: I0314 00:23:43.318250 3407 kubelet.go:480] "Attempting to sync node with API server"
Mar 14 00:23:43.320482 kubelet[3407]: I0314 00:23:43.320128 3407 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 14 00:23:43.320482 kubelet[3407]: I0314 00:23:43.320181 3407 kubelet.go:386] "Adding apiserver pod source"
Mar 14 00:23:43.320482 kubelet[3407]: I0314 00:23:43.320200 3407 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 14 00:23:43.326365 kubelet[3407]: I0314 00:23:43.325588 3407 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 14 00:23:43.330183 kubelet[3407]: I0314 00:23:43.330149 3407 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 14 00:23:43.363723 kubelet[3407]: I0314 00:23:43.363633 3407 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 14 00:23:43.363723 kubelet[3407]: I0314 00:23:43.363692 3407 server.go:1289] "Started kubelet"
Mar 14 00:23:43.370116 kubelet[3407]: I0314 00:23:43.369650 3407 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 14 00:23:43.375080 kubelet[3407]: I0314 00:23:43.374997 3407 server.go:317] "Adding debug handlers to kubelet server"
Mar 14 00:23:43.389340 kubelet[3407]: I0314 00:23:43.389259 3407 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 14 00:23:43.390007 kubelet[3407]: I0314 00:23:43.389550 3407 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 14 00:23:43.399007 kubelet[3407]: I0314 00:23:43.398804 3407 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 14 00:23:43.401410 kubelet[3407]: I0314 00:23:43.399262 3407 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 14 00:23:43.407691 kubelet[3407]: E0314 00:23:43.407347 3407 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 14 00:23:43.410009 kubelet[3407]: I0314 00:23:43.408851 3407 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 14 00:23:43.410009 kubelet[3407]: I0314 00:23:43.409027 3407 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 14 00:23:43.415781 kubelet[3407]: I0314 00:23:43.413377 3407 reconciler.go:26] "Reconciler: start to sync state"
Mar 14 00:23:43.429583 kubelet[3407]: I0314 00:23:43.428660 3407 factory.go:223] Registration of the containerd container factory successfully
Mar 14 00:23:43.429583 kubelet[3407]: I0314 00:23:43.428681 3407 factory.go:223] Registration of the systemd container factory successfully
Mar 14 00:23:43.429583 kubelet[3407]: I0314 00:23:43.428765 3407 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 14 00:23:43.441923 kubelet[3407]: I0314 00:23:43.441885 3407 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 14 00:23:43.443922 kubelet[3407]: I0314 00:23:43.443537 3407 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 14 00:23:43.443922 kubelet[3407]: I0314 00:23:43.443562 3407 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 14 00:23:43.443922 kubelet[3407]: I0314 00:23:43.443593 3407 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 14 00:23:43.443922 kubelet[3407]: I0314 00:23:43.443605 3407 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 14 00:23:43.443922 kubelet[3407]: E0314 00:23:43.443654 3407 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 14 00:23:43.506539 kubelet[3407]: I0314 00:23:43.506514 3407 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 14 00:23:43.506896 kubelet[3407]: I0314 00:23:43.506826 3407 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 14 00:23:43.506896 kubelet[3407]: I0314 00:23:43.506852 3407 state_mem.go:36] "Initialized new in-memory state store"
Mar 14 00:23:43.507359 kubelet[3407]: I0314 00:23:43.507315 3407 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 14 00:23:43.507512 kubelet[3407]: I0314 00:23:43.507333 3407 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 14 00:23:43.507512 kubelet[3407]: I0314 00:23:43.507457 3407 policy_none.go:49] "None policy: Start"
Mar 14 00:23:43.507512 kubelet[3407]: I0314 00:23:43.507471 3407 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 14 00:23:43.507512 kubelet[3407]: I0314 00:23:43.507485 3407 state_mem.go:35] "Initializing new in-memory state store"
Mar 14 00:23:43.508004 kubelet[3407]: I0314 00:23:43.507842 3407 state_mem.go:75] "Updated machine memory state"
Mar 14 00:23:43.514594 kubelet[3407]: E0314 00:23:43.513504 3407 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 14 00:23:43.514594 kubelet[3407]: I0314 00:23:43.513702 3407 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 14 00:23:43.514594 kubelet[3407]: I0314 00:23:43.513715 3407 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 14 00:23:43.515678 kubelet[3407]: I0314 00:23:43.515658 3407 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 14 00:23:43.527675 kubelet[3407]: E0314 00:23:43.527639 3407 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 14 00:23:43.547988 kubelet[3407]: I0314 00:23:43.547950 3407 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-30-82"
Mar 14 00:23:43.550521 kubelet[3407]: I0314 00:23:43.549741 3407 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-30-82"
Mar 14 00:23:43.550521 kubelet[3407]: I0314 00:23:43.550007 3407 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-30-82"
Mar 14 00:23:43.617695 kubelet[3407]: I0314 00:23:43.617649 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/51b8b8a6f54c523e6bf48e95f8902b50-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-82\" (UID: \"51b8b8a6f54c523e6bf48e95f8902b50\") " pod="kube-system/kube-controller-manager-ip-172-31-30-82"
Mar 14 00:23:43.618032 kubelet[3407]: I0314 00:23:43.618011 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/51b8b8a6f54c523e6bf48e95f8902b50-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-82\" (UID: \"51b8b8a6f54c523e6bf48e95f8902b50\") " pod="kube-system/kube-controller-manager-ip-172-31-30-82"
Mar 14 00:23:43.618296 kubelet[3407]: I0314 00:23:43.618275 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/51b8b8a6f54c523e6bf48e95f8902b50-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-82\" (UID: \"51b8b8a6f54c523e6bf48e95f8902b50\") " pod="kube-system/kube-controller-manager-ip-172-31-30-82"
Mar 14 00:23:43.618927 kubelet[3407]: I0314 00:23:43.618437 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/51b8b8a6f54c523e6bf48e95f8902b50-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-82\" (UID: \"51b8b8a6f54c523e6bf48e95f8902b50\") " pod="kube-system/kube-controller-manager-ip-172-31-30-82"
Mar 14 00:23:43.618927 kubelet[3407]: I0314 00:23:43.618469 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7757a779bdbc73047d8315ee4ec4bc40-ca-certs\") pod \"kube-apiserver-ip-172-31-30-82\" (UID: \"7757a779bdbc73047d8315ee4ec4bc40\") " pod="kube-system/kube-apiserver-ip-172-31-30-82"
Mar 14 00:23:43.619103 kubelet[3407]: I0314 00:23:43.618518 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7757a779bdbc73047d8315ee4ec4bc40-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-82\" (UID: \"7757a779bdbc73047d8315ee4ec4bc40\") " pod="kube-system/kube-apiserver-ip-172-31-30-82"
Mar 14 00:23:43.619216 kubelet[3407]: I0314 00:23:43.619196 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/51b8b8a6f54c523e6bf48e95f8902b50-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-82\" (UID: \"51b8b8a6f54c523e6bf48e95f8902b50\") " pod="kube-system/kube-controller-manager-ip-172-31-30-82"
Mar 14 00:23:43.619458 kubelet[3407]: I0314 00:23:43.619376 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1d63ca2531157db9e35494cc7e51703b-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-82\" (UID: \"1d63ca2531157db9e35494cc7e51703b\") " pod="kube-system/kube-scheduler-ip-172-31-30-82"
Mar 14 00:23:43.619458 kubelet[3407]: I0314 00:23:43.619408 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7757a779bdbc73047d8315ee4ec4bc40-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-82\" (UID: \"7757a779bdbc73047d8315ee4ec4bc40\") " pod="kube-system/kube-apiserver-ip-172-31-30-82"
Mar 14 00:23:43.622710 kubelet[3407]: I0314 00:23:43.621727 3407 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-82"
Mar 14 00:23:43.631966 kubelet[3407]: I0314 00:23:43.631929 3407 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-30-82"
Mar 14 00:23:43.632116 kubelet[3407]: I0314 00:23:43.632020 3407 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-30-82"
Mar 14 00:23:44.084912 sudo[3418]: pam_unix(sudo:session): session closed for user root
Mar 14 00:23:44.324741 kubelet[3407]: I0314 00:23:44.323231 3407 apiserver.go:52] "Watching apiserver"
Mar 14 00:23:44.379790 kubelet[3407]: I0314 00:23:44.379464 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-30-82" podStartSLOduration=1.379443939 podStartE2EDuration="1.379443939s" podCreationTimestamp="2026-03-14 00:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:23:44.36721756 +0000 UTC m=+1.181892205" watchObservedRunningTime="2026-03-14 00:23:44.379443939 +0000 UTC m=+1.194118566"
Mar 14 00:23:44.392529 kubelet[3407]: I0314 00:23:44.391709 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-30-82" podStartSLOduration=1.391684707 podStartE2EDuration="1.391684707s" podCreationTimestamp="2026-03-14 00:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:23:44.380508631 +0000 UTC m=+1.195183268" watchObservedRunningTime="2026-03-14 00:23:44.391684707 +0000 UTC m=+1.206359349"
Mar 14 00:23:44.409985 kubelet[3407]: I0314 00:23:44.409910 3407 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Mar 14 00:23:44.411316 kubelet[3407]: I0314 00:23:44.411117 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-30-82" podStartSLOduration=1.411097091 podStartE2EDuration="1.411097091s" podCreationTimestamp="2026-03-14 00:23:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:23:44.393679749 +0000 UTC m=+1.208354396" watchObservedRunningTime="2026-03-14 00:23:44.411097091 +0000 UTC m=+1.225771737"
Mar 14 00:23:45.761254 sudo[2327]: pam_unix(sudo:session): session closed for user root
Mar 14 00:23:45.842278 sshd[2324]: pam_unix(sshd:session): session closed for user core
Mar 14 00:23:45.846677 systemd[1]: sshd@8-172.31.30.82:22-68.220.241.50:36158.service: Deactivated successfully.
Mar 14 00:23:45.850876 systemd[1]: session-9.scope: Deactivated successfully.
Mar 14 00:23:45.851132 systemd[1]: session-9.scope: Consumed 5.807s CPU time, 144.1M memory peak, 0B memory swap peak.
Mar 14 00:23:45.852894 systemd-logind[1958]: Session 9 logged out. Waiting for processes to exit.
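The kubelet deprecation warnings earlier in this run ("Flag --container-runtime-endpoint has been deprecated…") point at the file passed via --config. A minimal KubeletConfiguration sketch using values actually visible in this log (cgroupDriver "systemd" from the nodeConfig dump, the static pod path from "Adding static pod path"); the containerd endpoint value is an assumption, since the log never prints it:

```yaml
# Hedged sketch of the file referenced by the kubelet's --config flag.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Replaces the deprecated --container-runtime-endpoint flag; the socket path
# below is assumed, not read from this log.
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
# Matches "CgroupDriver":"systemd" in the nodeConfig dump above.
cgroupDriver: systemd
# Matches the "Adding static pod path" entry above.
staticPodPath: /etc/kubernetes/manifests
```

Flags set on the command line still override the config file, which is why the kubelet keeps warning until the flags themselves are removed.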
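The systemd warning near the top of this run (docker.socket's ListenStream= referencing the legacy /var/run/ directory) can be fixed without editing the vendor unit by using a drop-in; the drop-in filename below is illustrative, not taken from this host:

```ini
# /etc/systemd/system/docker.socket.d/10-run-path.conf  (hypothetical drop-in path)
[Socket]
# An empty assignment clears the ListenStream list inherited from the vendor
# unit, then the socket is re-declared under /run instead of legacy /var/run.
ListenStream=
ListenStream=/run/docker.sock
```

After `systemctl daemon-reload`, systemd no longer has to rewrite the path at load time as it does in the log above.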
Mar 14 00:23:45.854433 systemd-logind[1958]: Removed session 9.
Mar 14 00:23:48.532987 kubelet[3407]: I0314 00:23:48.532942 3407 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 14 00:23:48.536223 containerd[1970]: time="2026-03-14T00:23:48.536173381Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 14 00:23:48.536659 kubelet[3407]: I0314 00:23:48.536416 3407 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 14 00:23:49.590148 systemd[1]: Created slice kubepods-besteffort-pod79571ade_b202_4e0a_82cb_6a5bfb744c12.slice - libcontainer container kubepods-besteffort-pod79571ade_b202_4e0a_82cb_6a5bfb744c12.slice.
Mar 14 00:23:49.612414 systemd[1]: Created slice kubepods-burstable-podb46fd587_6391_4eef_88a2_4c8495707809.slice - libcontainer container kubepods-burstable-podb46fd587_6391_4eef_88a2_4c8495707809.slice.
Mar 14 00:23:49.663430 kubelet[3407]: I0314 00:23:49.663369 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79571ade-b202-4e0a-82cb-6a5bfb744c12-lib-modules\") pod \"kube-proxy-9d9lr\" (UID: \"79571ade-b202-4e0a-82cb-6a5bfb744c12\") " pod="kube-system/kube-proxy-9d9lr"
Mar 14 00:23:49.664037 kubelet[3407]: I0314 00:23:49.664012 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh2k6\" (UniqueName: \"kubernetes.io/projected/79571ade-b202-4e0a-82cb-6a5bfb744c12-kube-api-access-fh2k6\") pod \"kube-proxy-9d9lr\" (UID: \"79571ade-b202-4e0a-82cb-6a5bfb744c12\") " pod="kube-system/kube-proxy-9d9lr"
Mar 14 00:23:49.664260 kubelet[3407]: I0314 00:23:49.664222 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b46fd587-6391-4eef-88a2-4c8495707809-cilium-cgroup\") pod \"cilium-vltf6\" (UID: \"b46fd587-6391-4eef-88a2-4c8495707809\") " pod="kube-system/cilium-vltf6"
Mar 14 00:23:49.664331 kubelet[3407]: I0314 00:23:49.664277 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b46fd587-6391-4eef-88a2-4c8495707809-clustermesh-secrets\") pod \"cilium-vltf6\" (UID: \"b46fd587-6391-4eef-88a2-4c8495707809\") " pod="kube-system/cilium-vltf6"
Mar 14 00:23:49.664331 kubelet[3407]: I0314 00:23:49.664305 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b46fd587-6391-4eef-88a2-4c8495707809-cilium-config-path\") pod \"cilium-vltf6\" (UID: \"b46fd587-6391-4eef-88a2-4c8495707809\") " pod="kube-system/cilium-vltf6"
Mar 14 00:23:49.664435 kubelet[3407]: I0314 00:23:49.664330 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b46fd587-6391-4eef-88a2-4c8495707809-hubble-tls\") pod \"cilium-vltf6\" (UID: \"b46fd587-6391-4eef-88a2-4c8495707809\") " pod="kube-system/cilium-vltf6"
Mar 14 00:23:49.664435 kubelet[3407]: I0314 00:23:49.664358 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/79571ade-b202-4e0a-82cb-6a5bfb744c12-kube-proxy\") pod \"kube-proxy-9d9lr\" (UID: \"79571ade-b202-4e0a-82cb-6a5bfb744c12\") " pod="kube-system/kube-proxy-9d9lr"
Mar 14 00:23:49.664435 kubelet[3407]: I0314 00:23:49.664398 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b46fd587-6391-4eef-88a2-4c8495707809-cilium-run\") pod \"cilium-vltf6\" (UID: \"b46fd587-6391-4eef-88a2-4c8495707809\") " pod="kube-system/cilium-vltf6"
Mar 14 00:23:49.664435 kubelet[3407]: I0314 00:23:49.664421 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b46fd587-6391-4eef-88a2-4c8495707809-cni-path\") pod \"cilium-vltf6\" (UID: \"b46fd587-6391-4eef-88a2-4c8495707809\") " pod="kube-system/cilium-vltf6"
Mar 14 00:23:49.664607 kubelet[3407]: I0314 00:23:49.664447 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b46fd587-6391-4eef-88a2-4c8495707809-etc-cni-netd\") pod \"cilium-vltf6\" (UID: \"b46fd587-6391-4eef-88a2-4c8495707809\") " pod="kube-system/cilium-vltf6"
Mar 14 00:23:49.664607 kubelet[3407]: I0314 00:23:49.664473 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b46fd587-6391-4eef-88a2-4c8495707809-lib-modules\") pod \"cilium-vltf6\" (UID: \"b46fd587-6391-4eef-88a2-4c8495707809\") " pod="kube-system/cilium-vltf6"
Mar 14 00:23:49.664607 kubelet[3407]: I0314 00:23:49.664506 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b46fd587-6391-4eef-88a2-4c8495707809-host-proc-sys-net\") pod \"cilium-vltf6\" (UID: \"b46fd587-6391-4eef-88a2-4c8495707809\") " pod="kube-system/cilium-vltf6"
Mar 14 00:23:49.664607 kubelet[3407]: I0314 00:23:49.664531 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/79571ade-b202-4e0a-82cb-6a5bfb744c12-xtables-lock\") pod \"kube-proxy-9d9lr\" (UID: \"79571ade-b202-4e0a-82cb-6a5bfb744c12\") " pod="kube-system/kube-proxy-9d9lr"
Mar 14 00:23:49.664607 kubelet[3407]: I0314 00:23:49.664553 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b46fd587-6391-4eef-88a2-4c8495707809-bpf-maps\") pod \"cilium-vltf6\" (UID: \"b46fd587-6391-4eef-88a2-4c8495707809\") " pod="kube-system/cilium-vltf6"
Mar 14 00:23:49.664607 kubelet[3407]: I0314 00:23:49.664575 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b46fd587-6391-4eef-88a2-4c8495707809-hostproc\") pod \"cilium-vltf6\" (UID: \"b46fd587-6391-4eef-88a2-4c8495707809\") " pod="kube-system/cilium-vltf6"
Mar 14 00:23:49.664851 kubelet[3407]: I0314 00:23:49.664597 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b46fd587-6391-4eef-88a2-4c8495707809-xtables-lock\") pod \"cilium-vltf6\" (UID: \"b46fd587-6391-4eef-88a2-4c8495707809\") " pod="kube-system/cilium-vltf6"
Mar 14 00:23:49.664851 kubelet[3407]: I0314 00:23:49.664620 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b46fd587-6391-4eef-88a2-4c8495707809-host-proc-sys-kernel\") pod \"cilium-vltf6\" (UID: \"b46fd587-6391-4eef-88a2-4c8495707809\") " pod="kube-system/cilium-vltf6"
Mar 14 00:23:49.664851 kubelet[3407]: I0314 00:23:49.664643 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szfzv\" (UniqueName: \"kubernetes.io/projected/b46fd587-6391-4eef-88a2-4c8495707809-kube-api-access-szfzv\") pod \"cilium-vltf6\" (UID: \"b46fd587-6391-4eef-88a2-4c8495707809\") " pod="kube-system/cilium-vltf6"
Mar 14 00:23:49.763718 systemd[1]: Created slice kubepods-besteffort-pod405527d2_65a6_4b41_b0eb_587369dcdd67.slice - libcontainer container kubepods-besteffort-pod405527d2_65a6_4b41_b0eb_587369dcdd67.slice.
Mar 14 00:23:49.866433 kubelet[3407]: I0314 00:23:49.866281 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/405527d2-65a6-4b41-b0eb-587369dcdd67-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-q7z55\" (UID: \"405527d2-65a6-4b41-b0eb-587369dcdd67\") " pod="kube-system/cilium-operator-6c4d7847fc-q7z55"
Mar 14 00:23:49.866704 kubelet[3407]: I0314 00:23:49.866620 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccvzj\" (UniqueName: \"kubernetes.io/projected/405527d2-65a6-4b41-b0eb-587369dcdd67-kube-api-access-ccvzj\") pod \"cilium-operator-6c4d7847fc-q7z55\" (UID: \"405527d2-65a6-4b41-b0eb-587369dcdd67\") " pod="kube-system/cilium-operator-6c4d7847fc-q7z55"
Mar 14 00:23:49.899709 containerd[1970]: time="2026-03-14T00:23:49.899666644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9d9lr,Uid:79571ade-b202-4e0a-82cb-6a5bfb744c12,Namespace:kube-system,Attempt:0,}"
Mar 14 00:23:49.919358 containerd[1970]: time="2026-03-14T00:23:49.919273115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vltf6,Uid:b46fd587-6391-4eef-88a2-4c8495707809,Namespace:kube-system,Attempt:0,}"
Mar 14 00:23:49.934672 containerd[1970]: time="2026-03-14T00:23:49.932917166Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:23:49.934672 containerd[1970]: time="2026-03-14T00:23:49.933044436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:23:49.934672 containerd[1970]: time="2026-03-14T00:23:49.933067883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:23:49.934672 containerd[1970]: time="2026-03-14T00:23:49.933246796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:23:49.961337 systemd[1]: Started cri-containerd-84220308857329ae5c8d1fab1bf7fea837724b324f3aea107f6057796622f86c.scope - libcontainer container 84220308857329ae5c8d1fab1bf7fea837724b324f3aea107f6057796622f86c.
Mar 14 00:23:49.978788 containerd[1970]: time="2026-03-14T00:23:49.978558061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:23:49.978788 containerd[1970]: time="2026-03-14T00:23:49.978635467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:23:49.978788 containerd[1970]: time="2026-03-14T00:23:49.978660624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:23:49.981734 containerd[1970]: time="2026-03-14T00:23:49.980730828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:23:50.026756 systemd[1]: Started cri-containerd-7e923b090b07d4f9342ce6f5fa2197faa1425c2570ca215d25caf75735d769e6.scope - libcontainer container 7e923b090b07d4f9342ce6f5fa2197faa1425c2570ca215d25caf75735d769e6.
Mar 14 00:23:50.028425 containerd[1970]: time="2026-03-14T00:23:50.027787017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9d9lr,Uid:79571ade-b202-4e0a-82cb-6a5bfb744c12,Namespace:kube-system,Attempt:0,} returns sandbox id \"84220308857329ae5c8d1fab1bf7fea837724b324f3aea107f6057796622f86c\""
Mar 14 00:23:50.038120 containerd[1970]: time="2026-03-14T00:23:50.038016169Z" level=info msg="CreateContainer within sandbox \"84220308857329ae5c8d1fab1bf7fea837724b324f3aea107f6057796622f86c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 14 00:23:50.073258 containerd[1970]: time="2026-03-14T00:23:50.073185238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vltf6,Uid:b46fd587-6391-4eef-88a2-4c8495707809,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e923b090b07d4f9342ce6f5fa2197faa1425c2570ca215d25caf75735d769e6\""
Mar 14 00:23:50.075744 containerd[1970]: time="2026-03-14T00:23:50.075548441Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Mar 14 00:23:50.079782 containerd[1970]: time="2026-03-14T00:23:50.079747405Z" level=info msg="CreateContainer within sandbox \"84220308857329ae5c8d1fab1bf7fea837724b324f3aea107f6057796622f86c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"38743feee31159793f413a59e8eacce750b6cb7f6bac96bec02de77214f2faae\""
Mar 14 00:23:50.081418 containerd[1970]: time="2026-03-14T00:23:50.081386281Z" level=info msg="StartContainer for \"38743feee31159793f413a59e8eacce750b6cb7f6bac96bec02de77214f2faae\""
Mar 14 00:23:50.082794 containerd[1970]: time="2026-03-14T00:23:50.081941839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-q7z55,Uid:405527d2-65a6-4b41-b0eb-587369dcdd67,Namespace:kube-system,Attempt:0,}"
Mar 14 00:23:50.127583 containerd[1970]: time="2026-03-14T00:23:50.127003493Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:23:50.127583 containerd[1970]: time="2026-03-14T00:23:50.127142268Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:23:50.127583 containerd[1970]: time="2026-03-14T00:23:50.127163070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:23:50.128732 containerd[1970]: time="2026-03-14T00:23:50.127404671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:23:50.129359 systemd[1]: Started cri-containerd-38743feee31159793f413a59e8eacce750b6cb7f6bac96bec02de77214f2faae.scope - libcontainer container 38743feee31159793f413a59e8eacce750b6cb7f6bac96bec02de77214f2faae.
Mar 14 00:23:50.163399 systemd[1]: Started cri-containerd-5a63dcfbe3c993ab7a97fd8810d8c50506048a4f5ca9e63f251ee003ee639006.scope - libcontainer container 5a63dcfbe3c993ab7a97fd8810d8c50506048a4f5ca9e63f251ee003ee639006.
Mar 14 00:23:50.203608 containerd[1970]: time="2026-03-14T00:23:50.203468287Z" level=info msg="StartContainer for \"38743feee31159793f413a59e8eacce750b6cb7f6bac96bec02de77214f2faae\" returns successfully"
Mar 14 00:23:50.242497 containerd[1970]: time="2026-03-14T00:23:50.242442381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-q7z55,Uid:405527d2-65a6-4b41-b0eb-587369dcdd67,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a63dcfbe3c993ab7a97fd8810d8c50506048a4f5ca9e63f251ee003ee639006\""
Mar 14 00:23:51.683764 kubelet[3407]: I0314 00:23:51.683692 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9d9lr" podStartSLOduration=2.6836710310000003 podStartE2EDuration="2.683671031s" podCreationTimestamp="2026-03-14 00:23:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:23:50.512413969 +0000 UTC m=+7.327088611" watchObservedRunningTime="2026-03-14 00:23:51.683671031 +0000 UTC m=+8.498345677"
Mar 14 00:23:57.354320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3524897022.mount: Deactivated successfully.
Mar 14 00:24:00.231686 containerd[1970]: time="2026-03-14T00:24:00.231618891Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:24:00.240974 containerd[1970]: time="2026-03-14T00:24:00.240892392Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Mar 14 00:24:00.245207 containerd[1970]: time="2026-03-14T00:24:00.243898737Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:24:00.247891 containerd[1970]: time="2026-03-14T00:24:00.247830771Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.172230595s"
Mar 14 00:24:00.248288 containerd[1970]: time="2026-03-14T00:24:00.248082989Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Mar 14 00:24:00.263469 containerd[1970]: time="2026-03-14T00:24:00.263430656Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 14 00:24:00.304154 containerd[1970]: time="2026-03-14T00:24:00.304064897Z" level=info msg="CreateContainer within sandbox \"7e923b090b07d4f9342ce6f5fa2197faa1425c2570ca215d25caf75735d769e6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 14 00:24:00.513623 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4041855510.mount: Deactivated successfully.
Mar 14 00:24:00.613225 containerd[1970]: time="2026-03-14T00:24:00.613166991Z" level=info msg="CreateContainer within sandbox \"7e923b090b07d4f9342ce6f5fa2197faa1425c2570ca215d25caf75735d769e6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ee6a45a89bad2c4fac347af6ea5d685e9b8f9aeecedcf34702b0c96c1c3069ae\""
Mar 14 00:24:00.615422 containerd[1970]: time="2026-03-14T00:24:00.614282690Z" level=info msg="StartContainer for \"ee6a45a89bad2c4fac347af6ea5d685e9b8f9aeecedcf34702b0c96c1c3069ae\""
Mar 14 00:24:00.856417 systemd[1]: Started cri-containerd-ee6a45a89bad2c4fac347af6ea5d685e9b8f9aeecedcf34702b0c96c1c3069ae.scope - libcontainer container ee6a45a89bad2c4fac347af6ea5d685e9b8f9aeecedcf34702b0c96c1c3069ae.
Mar 14 00:24:00.901131 containerd[1970]: time="2026-03-14T00:24:00.901017957Z" level=info msg="StartContainer for \"ee6a45a89bad2c4fac347af6ea5d685e9b8f9aeecedcf34702b0c96c1c3069ae\" returns successfully"
Mar 14 00:24:00.933891 systemd[1]: cri-containerd-ee6a45a89bad2c4fac347af6ea5d685e9b8f9aeecedcf34702b0c96c1c3069ae.scope: Deactivated successfully.
Mar 14 00:24:01.145130 containerd[1970]: time="2026-03-14T00:24:01.099897934Z" level=info msg="shim disconnected" id=ee6a45a89bad2c4fac347af6ea5d685e9b8f9aeecedcf34702b0c96c1c3069ae namespace=k8s.io
Mar 14 00:24:01.145130 containerd[1970]: time="2026-03-14T00:24:01.144131007Z" level=warning msg="cleaning up after shim disconnected" id=ee6a45a89bad2c4fac347af6ea5d685e9b8f9aeecedcf34702b0c96c1c3069ae namespace=k8s.io
Mar 14 00:24:01.145130 containerd[1970]: time="2026-03-14T00:24:01.144153467Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:24:01.498675 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee6a45a89bad2c4fac347af6ea5d685e9b8f9aeecedcf34702b0c96c1c3069ae-rootfs.mount: Deactivated successfully.
Mar 14 00:24:01.587415 containerd[1970]: time="2026-03-14T00:24:01.587249902Z" level=info msg="CreateContainer within sandbox \"7e923b090b07d4f9342ce6f5fa2197faa1425c2570ca215d25caf75735d769e6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 14 00:24:01.678486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1222592706.mount: Deactivated successfully.
Mar 14 00:24:01.822738 containerd[1970]: time="2026-03-14T00:24:01.822685307Z" level=info msg="CreateContainer within sandbox \"7e923b090b07d4f9342ce6f5fa2197faa1425c2570ca215d25caf75735d769e6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d6724c7d99f015deddfedd9ee227bdf9e7ec6b71f4c2299fccd62d751698a950\""
Mar 14 00:24:01.838277 containerd[1970]: time="2026-03-14T00:24:01.834942617Z" level=info msg="StartContainer for \"d6724c7d99f015deddfedd9ee227bdf9e7ec6b71f4c2299fccd62d751698a950\""
Mar 14 00:24:02.278393 systemd[1]: Started cri-containerd-d6724c7d99f015deddfedd9ee227bdf9e7ec6b71f4c2299fccd62d751698a950.scope - libcontainer container d6724c7d99f015deddfedd9ee227bdf9e7ec6b71f4c2299fccd62d751698a950.
Mar 14 00:24:02.514990 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1238260272.mount: Deactivated successfully.
Mar 14 00:24:02.711527 containerd[1970]: time="2026-03-14T00:24:02.711460790Z" level=info msg="StartContainer for \"d6724c7d99f015deddfedd9ee227bdf9e7ec6b71f4c2299fccd62d751698a950\" returns successfully"
Mar 14 00:24:02.820200 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 14 00:24:02.820618 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:24:02.820726 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:24:02.875758 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:24:02.890641 systemd[1]: cri-containerd-d6724c7d99f015deddfedd9ee227bdf9e7ec6b71f4c2299fccd62d751698a950.scope: Deactivated successfully.
Mar 14 00:24:03.142015 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:24:03.205687 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6724c7d99f015deddfedd9ee227bdf9e7ec6b71f4c2299fccd62d751698a950-rootfs.mount: Deactivated successfully.
Mar 14 00:24:03.288967 containerd[1970]: time="2026-03-14T00:24:03.288883651Z" level=info msg="shim disconnected" id=d6724c7d99f015deddfedd9ee227bdf9e7ec6b71f4c2299fccd62d751698a950 namespace=k8s.io
Mar 14 00:24:03.288967 containerd[1970]: time="2026-03-14T00:24:03.288964878Z" level=warning msg="cleaning up after shim disconnected" id=d6724c7d99f015deddfedd9ee227bdf9e7ec6b71f4c2299fccd62d751698a950 namespace=k8s.io
Mar 14 00:24:03.288967 containerd[1970]: time="2026-03-14T00:24:03.288976771Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:24:03.726826 containerd[1970]: time="2026-03-14T00:24:03.726743626Z" level=info msg="CreateContainer within sandbox \"7e923b090b07d4f9342ce6f5fa2197faa1425c2570ca215d25caf75735d769e6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 14 00:24:03.887700 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2834465954.mount: Deactivated successfully.
Mar 14 00:24:03.927012 containerd[1970]: time="2026-03-14T00:24:03.926753755Z" level=info msg="CreateContainer within sandbox \"7e923b090b07d4f9342ce6f5fa2197faa1425c2570ca215d25caf75735d769e6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b5bd44a864a8a76e06fa62baab72090912ae1f32fbe376199165e4599537d736\""
Mar 14 00:24:03.929122 containerd[1970]: time="2026-03-14T00:24:03.927625224Z" level=info msg="StartContainer for \"b5bd44a864a8a76e06fa62baab72090912ae1f32fbe376199165e4599537d736\""
Mar 14 00:24:04.076418 systemd[1]: Started cri-containerd-b5bd44a864a8a76e06fa62baab72090912ae1f32fbe376199165e4599537d736.scope - libcontainer container b5bd44a864a8a76e06fa62baab72090912ae1f32fbe376199165e4599537d736.
Mar 14 00:24:04.187233 containerd[1970]: time="2026-03-14T00:24:04.187179897Z" level=info msg="StartContainer for \"b5bd44a864a8a76e06fa62baab72090912ae1f32fbe376199165e4599537d736\" returns successfully"
Mar 14 00:24:04.219023 systemd[1]: cri-containerd-b5bd44a864a8a76e06fa62baab72090912ae1f32fbe376199165e4599537d736.scope: Deactivated successfully.
Mar 14 00:24:04.393838 containerd[1970]: time="2026-03-14T00:24:04.381654939Z" level=info msg="shim disconnected" id=b5bd44a864a8a76e06fa62baab72090912ae1f32fbe376199165e4599537d736 namespace=k8s.io
Mar 14 00:24:04.393838 containerd[1970]: time="2026-03-14T00:24:04.381903416Z" level=warning msg="cleaning up after shim disconnected" id=b5bd44a864a8a76e06fa62baab72090912ae1f32fbe376199165e4599537d736 namespace=k8s.io
Mar 14 00:24:04.393838 containerd[1970]: time="2026-03-14T00:24:04.382042428Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:24:04.706103 containerd[1970]: time="2026-03-14T00:24:04.705257026Z" level=info msg="CreateContainer within sandbox \"7e923b090b07d4f9342ce6f5fa2197faa1425c2570ca215d25caf75735d769e6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 14 00:24:04.747015 containerd[1970]: time="2026-03-14T00:24:04.746973895Z" level=info msg="CreateContainer within sandbox \"7e923b090b07d4f9342ce6f5fa2197faa1425c2570ca215d25caf75735d769e6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7c7608e2552e7804fcd3714c279236fa0acd0ab94203703678fef6984eb634cf\""
Mar 14 00:24:04.751152 containerd[1970]: time="2026-03-14T00:24:04.750649668Z" level=info msg="StartContainer for \"7c7608e2552e7804fcd3714c279236fa0acd0ab94203703678fef6984eb634cf\""
Mar 14 00:24:04.798737 systemd[1]: Started cri-containerd-7c7608e2552e7804fcd3714c279236fa0acd0ab94203703678fef6984eb634cf.scope - libcontainer container 7c7608e2552e7804fcd3714c279236fa0acd0ab94203703678fef6984eb634cf.
Mar 14 00:24:04.837065 systemd[1]: cri-containerd-7c7608e2552e7804fcd3714c279236fa0acd0ab94203703678fef6984eb634cf.scope: Deactivated successfully.
Mar 14 00:24:04.844324 containerd[1970]: time="2026-03-14T00:24:04.843921064Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb46fd587_6391_4eef_88a2_4c8495707809.slice/cri-containerd-7c7608e2552e7804fcd3714c279236fa0acd0ab94203703678fef6984eb634cf.scope/memory.events\": no such file or directory"
Mar 14 00:24:04.847848 containerd[1970]: time="2026-03-14T00:24:04.847594461Z" level=info msg="StartContainer for \"7c7608e2552e7804fcd3714c279236fa0acd0ab94203703678fef6984eb634cf\" returns successfully"
Mar 14 00:24:04.889624 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5bd44a864a8a76e06fa62baab72090912ae1f32fbe376199165e4599537d736-rootfs.mount: Deactivated successfully.
Mar 14 00:24:04.898664 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c7608e2552e7804fcd3714c279236fa0acd0ab94203703678fef6984eb634cf-rootfs.mount: Deactivated successfully.
Mar 14 00:24:04.954020 containerd[1970]: time="2026-03-14T00:24:04.953868044Z" level=info msg="shim disconnected" id=7c7608e2552e7804fcd3714c279236fa0acd0ab94203703678fef6984eb634cf namespace=k8s.io
Mar 14 00:24:04.954284 containerd[1970]: time="2026-03-14T00:24:04.954143542Z" level=warning msg="cleaning up after shim disconnected" id=7c7608e2552e7804fcd3714c279236fa0acd0ab94203703678fef6984eb634cf namespace=k8s.io
Mar 14 00:24:04.954284 containerd[1970]: time="2026-03-14T00:24:04.954167077Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:24:05.226007 containerd[1970]: time="2026-03-14T00:24:05.225966362Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:24:05.227825 containerd[1970]: time="2026-03-14T00:24:05.227752930Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Mar 14 00:24:05.230071 containerd[1970]: time="2026-03-14T00:24:05.230001877Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:24:05.231957 containerd[1970]: time="2026-03-14T00:24:05.231794792Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.968128112s"
Mar 14 00:24:05.231957 containerd[1970]: time="2026-03-14T00:24:05.231846735Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Mar 14 00:24:05.239190 containerd[1970]: time="2026-03-14T00:24:05.239138569Z" level=info msg="CreateContainer within sandbox \"5a63dcfbe3c993ab7a97fd8810d8c50506048a4f5ca9e63f251ee003ee639006\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 14 00:24:05.264368 containerd[1970]: time="2026-03-14T00:24:05.264310920Z" level=info msg="CreateContainer within sandbox \"5a63dcfbe3c993ab7a97fd8810d8c50506048a4f5ca9e63f251ee003ee639006\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"75d77585118bc50c6086371ce2b088b48065430da18c3c50a48450b498f2ba38\""
Mar 14 00:24:05.266782 containerd[1970]: time="2026-03-14T00:24:05.265589366Z" level=info msg="StartContainer for \"75d77585118bc50c6086371ce2b088b48065430da18c3c50a48450b498f2ba38\""
Mar 14 00:24:05.309463 systemd[1]: Started cri-containerd-75d77585118bc50c6086371ce2b088b48065430da18c3c50a48450b498f2ba38.scope - libcontainer container 75d77585118bc50c6086371ce2b088b48065430da18c3c50a48450b498f2ba38.
Mar 14 00:24:05.350215 containerd[1970]: time="2026-03-14T00:24:05.350160291Z" level=info msg="StartContainer for \"75d77585118bc50c6086371ce2b088b48065430da18c3c50a48450b498f2ba38\" returns successfully"
Mar 14 00:24:05.714576 containerd[1970]: time="2026-03-14T00:24:05.714539668Z" level=info msg="CreateContainer within sandbox \"7e923b090b07d4f9342ce6f5fa2197faa1425c2570ca215d25caf75735d769e6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 14 00:24:05.746166 containerd[1970]: time="2026-03-14T00:24:05.743916263Z" level=info msg="CreateContainer within sandbox \"7e923b090b07d4f9342ce6f5fa2197faa1425c2570ca215d25caf75735d769e6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"798135cc7a7e318bf59e16baf5507d277f4ca5d94bead63340247cd2e61bb208\""
Mar 14 00:24:05.747161 containerd[1970]: time="2026-03-14T00:24:05.747118239Z" level=info msg="StartContainer for \"798135cc7a7e318bf59e16baf5507d277f4ca5d94bead63340247cd2e61bb208\""
Mar 14 00:24:05.799236 kubelet[3407]: I0314 00:24:05.796570 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-q7z55" podStartSLOduration=1.806093856 podStartE2EDuration="16.794839406s" podCreationTimestamp="2026-03-14 00:23:49 +0000 UTC" firstStartedPulling="2026-03-14 00:23:50.244389949 +0000 UTC m=+7.059064574" lastFinishedPulling="2026-03-14 00:24:05.233135485 +0000 UTC m=+22.047810124" observedRunningTime="2026-03-14 00:24:05.717323897 +0000 UTC m=+22.531998543" watchObservedRunningTime="2026-03-14 00:24:05.794839406 +0000 UTC m=+22.609514052"
Mar 14 00:24:05.810404 systemd[1]: Started cri-containerd-798135cc7a7e318bf59e16baf5507d277f4ca5d94bead63340247cd2e61bb208.scope - libcontainer container 798135cc7a7e318bf59e16baf5507d277f4ca5d94bead63340247cd2e61bb208.
Mar 14 00:24:05.885805 systemd[1]: run-containerd-runc-k8s.io-75d77585118bc50c6086371ce2b088b48065430da18c3c50a48450b498f2ba38-runc.UpsPYB.mount: Deactivated successfully.
Mar 14 00:24:05.905279 containerd[1970]: time="2026-03-14T00:24:05.905231061Z" level=info msg="StartContainer for \"798135cc7a7e318bf59e16baf5507d277f4ca5d94bead63340247cd2e61bb208\" returns successfully"
Mar 14 00:24:06.390883 kubelet[3407]: I0314 00:24:06.390849 3407 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Mar 14 00:24:06.580003 systemd[1]: Created slice kubepods-burstable-podfcd6143b_7b0d_4d64_8b15_d21944fa701e.slice - libcontainer container kubepods-burstable-podfcd6143b_7b0d_4d64_8b15_d21944fa701e.slice.
Mar 14 00:24:06.589679 systemd[1]: Created slice kubepods-burstable-pod96c0570e_b223_4354_8460_e0b7e608a5bf.slice - libcontainer container kubepods-burstable-pod96c0570e_b223_4354_8460_e0b7e608a5bf.slice.
Mar 14 00:24:06.635890 kubelet[3407]: I0314 00:24:06.634103 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96c0570e-b223-4354-8460-e0b7e608a5bf-config-volume\") pod \"coredns-674b8bbfcf-crdw6\" (UID: \"96c0570e-b223-4354-8460-e0b7e608a5bf\") " pod="kube-system/coredns-674b8bbfcf-crdw6"
Mar 14 00:24:06.635890 kubelet[3407]: I0314 00:24:06.634161 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsskn\" (UniqueName: \"kubernetes.io/projected/96c0570e-b223-4354-8460-e0b7e608a5bf-kube-api-access-rsskn\") pod \"coredns-674b8bbfcf-crdw6\" (UID: \"96c0570e-b223-4354-8460-e0b7e608a5bf\") " pod="kube-system/coredns-674b8bbfcf-crdw6"
Mar 14 00:24:06.635890 kubelet[3407]: I0314 00:24:06.634194 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qg7l\" (UniqueName: \"kubernetes.io/projected/fcd6143b-7b0d-4d64-8b15-d21944fa701e-kube-api-access-9qg7l\") pod \"coredns-674b8bbfcf-l25zk\" (UID: \"fcd6143b-7b0d-4d64-8b15-d21944fa701e\") " pod="kube-system/coredns-674b8bbfcf-l25zk"
Mar 14 00:24:06.635890 kubelet[3407]: I0314 00:24:06.634216 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fcd6143b-7b0d-4d64-8b15-d21944fa701e-config-volume\") pod \"coredns-674b8bbfcf-l25zk\" (UID: \"fcd6143b-7b0d-4d64-8b15-d21944fa701e\") " pod="kube-system/coredns-674b8bbfcf-l25zk"
Mar 14 00:24:06.763225 kubelet[3407]: I0314 00:24:06.763050 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vltf6" podStartSLOduration=7.577575807 podStartE2EDuration="17.763027568s" podCreationTimestamp="2026-03-14 00:23:49 +0000 UTC" firstStartedPulling="2026-03-14 00:23:50.075184081 +0000 UTC m=+6.889858717" lastFinishedPulling="2026-03-14 00:24:00.260635834 +0000 UTC m=+17.075310478" observedRunningTime="2026-03-14 00:24:06.760267231 +0000 UTC m=+23.574941890" watchObservedRunningTime="2026-03-14 00:24:06.763027568 +0000 UTC m=+23.577702217"
Mar 14 00:24:06.890317 containerd[1970]: time="2026-03-14T00:24:06.890142771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-l25zk,Uid:fcd6143b-7b0d-4d64-8b15-d21944fa701e,Namespace:kube-system,Attempt:0,}"
Mar 14 00:24:06.897730 containerd[1970]: time="2026-03-14T00:24:06.897683144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-crdw6,Uid:96c0570e-b223-4354-8460-e0b7e608a5bf,Namespace:kube-system,Attempt:0,}"
Mar 14 00:24:08.962686 systemd-networkd[1898]: cilium_host: Link UP
Mar 14 00:24:08.969823 (udev-worker)[4225]: Network interface NamePolicy= disabled on kernel command line.
Mar 14 00:24:08.975280 systemd-networkd[1898]: cilium_net: Link UP
Mar 14 00:24:08.975290 systemd-networkd[1898]: cilium_net: Gained carrier
Mar 14 00:24:08.975480 (udev-worker)[4260]: Network interface NamePolicy= disabled on kernel command line.
Mar 14 00:24:08.975590 systemd-networkd[1898]: cilium_host: Gained carrier
Mar 14 00:24:08.975934 systemd-networkd[1898]: cilium_host: Gained IPv6LL
Mar 14 00:24:09.113685 systemd-networkd[1898]: cilium_net: Gained IPv6LL
Mar 14 00:24:09.136600 systemd-networkd[1898]: cilium_vxlan: Link UP
Mar 14 00:24:09.136609 systemd-networkd[1898]: cilium_vxlan: Gained carrier
Mar 14 00:24:09.693142 kernel: NET: Registered PF_ALG protocol family
Mar 14 00:24:10.480869 systemd-networkd[1898]: lxc_health: Link UP
Mar 14 00:24:10.482503 systemd-networkd[1898]: lxc_health: Gained carrier
Mar 14 00:24:10.569298 systemd-networkd[1898]: cilium_vxlan: Gained IPv6LL
Mar 14 00:24:11.114828 systemd-networkd[1898]: lxce1786a1fe379: Link UP
Mar 14 00:24:11.122233 kernel: eth0: renamed from tmpd2cdc
Mar 14 00:24:11.131628 systemd-networkd[1898]: lxce1786a1fe379: Gained carrier
Mar 14 00:24:11.135045 (udev-worker)[4279]: Network interface NamePolicy= disabled on kernel command line.
Mar 14 00:24:11.138778 systemd-networkd[1898]: lxccad61aa4293a: Link UP
Mar 14 00:24:11.155542 kernel: eth0: renamed from tmp1403c
Mar 14 00:24:11.164563 systemd-networkd[1898]: lxccad61aa4293a: Gained carrier
Mar 14 00:24:11.721433 systemd-networkd[1898]: lxc_health: Gained IPv6LL
Mar 14 00:24:12.747261 systemd-networkd[1898]: lxccad61aa4293a: Gained IPv6LL
Mar 14 00:24:12.937423 systemd-networkd[1898]: lxce1786a1fe379: Gained IPv6LL
Mar 14 00:24:15.516139 containerd[1970]: time="2026-03-14T00:24:15.515483361Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:24:15.516139 containerd[1970]: time="2026-03-14T00:24:15.515582080Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:24:15.516139 containerd[1970]: time="2026-03-14T00:24:15.515607024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:24:15.516139 containerd[1970]: time="2026-03-14T00:24:15.515716792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:24:15.570305 systemd[1]: Started cri-containerd-d2cdc50738b6bff36b886b49ea7c0c5cc301f93bcd473f2c5bc53f603ea053e9.scope - libcontainer container d2cdc50738b6bff36b886b49ea7c0c5cc301f93bcd473f2c5bc53f603ea053e9.
Mar 14 00:24:15.706522 containerd[1970]: time="2026-03-14T00:24:15.706435494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-l25zk,Uid:fcd6143b-7b0d-4d64-8b15-d21944fa701e,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2cdc50738b6bff36b886b49ea7c0c5cc301f93bcd473f2c5bc53f603ea053e9\""
Mar 14 00:24:15.719222 containerd[1970]: time="2026-03-14T00:24:15.717923672Z" level=info msg="CreateContainer within sandbox \"d2cdc50738b6bff36b886b49ea7c0c5cc301f93bcd473f2c5bc53f603ea053e9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 14 00:24:15.743737 containerd[1970]: time="2026-03-14T00:24:15.726643998Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:24:15.743737 containerd[1970]: time="2026-03-14T00:24:15.726750954Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:24:15.743737 containerd[1970]: time="2026-03-14T00:24:15.726787852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:24:15.743737 containerd[1970]: time="2026-03-14T00:24:15.726910407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:24:15.770782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2652543103.mount: Deactivated successfully.
Mar 14 00:24:15.777233 containerd[1970]: time="2026-03-14T00:24:15.777188261Z" level=info msg="CreateContainer within sandbox \"d2cdc50738b6bff36b886b49ea7c0c5cc301f93bcd473f2c5bc53f603ea053e9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f5b184b33df589fc90ef06700dd875fb5a3c5bf615f673a88b6d7162d946d002\""
Mar 14 00:24:15.781123 containerd[1970]: time="2026-03-14T00:24:15.780512234Z" level=info msg="StartContainer for \"f5b184b33df589fc90ef06700dd875fb5a3c5bf615f673a88b6d7162d946d002\""
Mar 14 00:24:15.799357 systemd[1]: Started cri-containerd-1403ccce27174924df8463ce278aa1e7c03f8fa877215220215af9c64cedd7d5.scope - libcontainer container 1403ccce27174924df8463ce278aa1e7c03f8fa877215220215af9c64cedd7d5.
Mar 14 00:24:15.849632 systemd[1]: Started cri-containerd-f5b184b33df589fc90ef06700dd875fb5a3c5bf615f673a88b6d7162d946d002.scope - libcontainer container f5b184b33df589fc90ef06700dd875fb5a3c5bf615f673a88b6d7162d946d002.
Mar 14 00:24:15.897455 containerd[1970]: time="2026-03-14T00:24:15.897064474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-crdw6,Uid:96c0570e-b223-4354-8460-e0b7e608a5bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"1403ccce27174924df8463ce278aa1e7c03f8fa877215220215af9c64cedd7d5\""
Mar 14 00:24:15.909118 containerd[1970]: time="2026-03-14T00:24:15.908809643Z" level=info msg="CreateContainer within sandbox \"1403ccce27174924df8463ce278aa1e7c03f8fa877215220215af9c64cedd7d5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 14 00:24:15.918826 containerd[1970]: time="2026-03-14T00:24:15.918780225Z" level=info msg="StartContainer for \"f5b184b33df589fc90ef06700dd875fb5a3c5bf615f673a88b6d7162d946d002\" returns successfully"
Mar 14 00:24:15.925595 containerd[1970]: time="2026-03-14T00:24:15.925546628Z" level=info msg="CreateContainer within sandbox \"1403ccce27174924df8463ce278aa1e7c03f8fa877215220215af9c64cedd7d5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6334c22e423a7f2ce9edaad8543687463f6e8dfe3122e9a4cd4288a2b5c66cac\""
Mar 14 00:24:15.926802 containerd[1970]: time="2026-03-14T00:24:15.926289569Z" level=info msg="StartContainer for \"6334c22e423a7f2ce9edaad8543687463f6e8dfe3122e9a4cd4288a2b5c66cac\""
Mar 14 00:24:15.962355 systemd[1]: Started cri-containerd-6334c22e423a7f2ce9edaad8543687463f6e8dfe3122e9a4cd4288a2b5c66cac.scope - libcontainer container 6334c22e423a7f2ce9edaad8543687463f6e8dfe3122e9a4cd4288a2b5c66cac.
Mar 14 00:24:15.998083 containerd[1970]: time="2026-03-14T00:24:15.998038030Z" level=info msg="StartContainer for \"6334c22e423a7f2ce9edaad8543687463f6e8dfe3122e9a4cd4288a2b5c66cac\" returns successfully"
Mar 14 00:24:16.523751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1968129346.mount: Deactivated successfully.
Mar 14 00:24:16.790161 kubelet[3407]: I0314 00:24:16.777047 3407 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 14 00:24:16.803740 kubelet[3407]: I0314 00:24:16.803657 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-l25zk" podStartSLOduration=27.803642502 podStartE2EDuration="27.803642502s" podCreationTimestamp="2026-03-14 00:23:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:24:16.801369934 +0000 UTC m=+33.616044579" watchObservedRunningTime="2026-03-14 00:24:16.803642502 +0000 UTC m=+33.618317147"
Mar 14 00:24:16.844827 kubelet[3407]: I0314 00:24:16.844718 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-crdw6" podStartSLOduration=27.84469536 podStartE2EDuration="27.84469536s" podCreationTimestamp="2026-03-14 00:23:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:24:16.842270017 +0000 UTC m=+33.656944663" watchObservedRunningTime="2026-03-14 00:24:16.84469536 +0000 UTC m=+33.659370004"
Mar 14 00:24:18.641924 ntpd[1951]: Listen normally on 8 cilium_host 192.168.0.45:123
Mar 14 00:24:18.642029 ntpd[1951]: Listen normally on 9 cilium_net [fe80::7c55:81ff:fef7:670f%4]:123
Mar 14 00:24:18.642115 ntpd[1951]: Listen normally on 10 cilium_host [fe80::3c6e:48ff:fedf:54b9%5]:123
Mar 14 00:24:18.642174 ntpd[1951]: Listen normally on 11 cilium_vxlan [fe80::3460:36ff:fe10:bef1%6]:123
Mar 14 00:24:18.642223 ntpd[1951]: Listen normally on 12 lxc_health [fe80::b044:85ff:fe3f:acff%8]:123
Mar 14 00:24:18.642263 ntpd[1951]: Listen normally on 13 lxce1786a1fe379 [fe80::b09b:72ff:fe5e:4b72%10]:123
Mar 14 00:24:18.642300 ntpd[1951]: Listen normally on 14 lxccad61aa4293a [fe80::8c6d:96ff:fe21:5a47%12]:123
Mar 14 00:24:18.642568 ntpd[1951]: 14 Mar 00:24:18 ntpd[1951]: Listen normally on 8 cilium_host 192.168.0.45:123
Mar 14 00:24:18.642568 ntpd[1951]: 14 Mar 00:24:18 ntpd[1951]: Listen normally on 9 cilium_net [fe80::7c55:81ff:fef7:670f%4]:123
Mar 14 00:24:18.642568 ntpd[1951]: 14 Mar 00:24:18 ntpd[1951]: Listen normally on 10 cilium_host [fe80::3c6e:48ff:fedf:54b9%5]:123
Mar 14 00:24:18.642568 ntpd[1951]: 14 Mar 00:24:18 ntpd[1951]: Listen normally on 11 cilium_vxlan [fe80::3460:36ff:fe10:bef1%6]:123
Mar 14 00:24:18.642568 ntpd[1951]: 14 Mar 00:24:18 ntpd[1951]: Listen normally on 12 lxc_health [fe80::b044:85ff:fe3f:acff%8]:123
Mar 14 00:24:18.642568 ntpd[1951]: 14 Mar 00:24:18 ntpd[1951]: Listen normally on 13 lxce1786a1fe379 [fe80::b09b:72ff:fe5e:4b72%10]:123
Mar 14 00:24:18.642568 ntpd[1951]: 14 Mar 00:24:18 ntpd[1951]: Listen normally on 14 lxccad61aa4293a [fe80::8c6d:96ff:fe21:5a47%12]:123
Mar 14 00:24:20.600927 systemd[1]: Started sshd@9-172.31.30.82:22-68.220.241.50:46214.service - OpenSSH per-connection server daemon (68.220.241.50:46214).
Mar 14 00:24:21.426511 sshd[4803]: Accepted publickey for core from 68.220.241.50 port 46214 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:24:21.427766 sshd[4803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:24:21.434644 systemd-logind[1958]: New session 10 of user core.
Mar 14 00:24:21.440405 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 14 00:24:22.479310 sshd[4803]: pam_unix(sshd:session): session closed for user core
Mar 14 00:24:22.484411 systemd-logind[1958]: Session 10 logged out. Waiting for processes to exit.
Mar 14 00:24:22.485770 systemd[1]: sshd@9-172.31.30.82:22-68.220.241.50:46214.service: Deactivated successfully.
Mar 14 00:24:22.487849 systemd[1]: session-10.scope: Deactivated successfully.
Mar 14 00:24:22.489232 systemd-logind[1958]: Removed session 10.
Mar 14 00:24:27.562966 systemd[1]: Started sshd@10-172.31.30.82:22-68.220.241.50:60212.service - OpenSSH per-connection server daemon (68.220.241.50:60212).
Mar 14 00:24:28.054132 sshd[4819]: Accepted publickey for core from 68.220.241.50 port 60212 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:24:28.055663 sshd[4819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:24:28.062454 systemd-logind[1958]: New session 11 of user core.
Mar 14 00:24:28.066356 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 14 00:24:28.503299 sshd[4819]: pam_unix(sshd:session): session closed for user core
Mar 14 00:24:28.508450 systemd[1]: sshd@10-172.31.30.82:22-68.220.241.50:60212.service: Deactivated successfully.
Mar 14 00:24:28.511138 systemd[1]: session-11.scope: Deactivated successfully.
Mar 14 00:24:28.512149 systemd-logind[1958]: Session 11 logged out. Waiting for processes to exit.
Mar 14 00:24:28.513872 systemd-logind[1958]: Removed session 11.
Mar 14 00:24:33.594503 systemd[1]: Started sshd@11-172.31.30.82:22-68.220.241.50:37678.service - OpenSSH per-connection server daemon (68.220.241.50:37678).
Mar 14 00:24:34.093245 sshd[4833]: Accepted publickey for core from 68.220.241.50 port 37678 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:24:34.094879 sshd[4833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:24:34.100532 systemd-logind[1958]: New session 12 of user core.
Mar 14 00:24:34.108364 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 14 00:24:34.525195 sshd[4833]: pam_unix(sshd:session): session closed for user core
Mar 14 00:24:34.536730 systemd-logind[1958]: Session 12 logged out. Waiting for processes to exit.
Mar 14 00:24:34.537748 systemd[1]: sshd@11-172.31.30.82:22-68.220.241.50:37678.service: Deactivated successfully.
Mar 14 00:24:34.540323 systemd[1]: session-12.scope: Deactivated successfully.
Mar 14 00:24:34.541944 systemd-logind[1958]: Removed session 12.
Mar 14 00:24:39.631708 systemd[1]: Started sshd@12-172.31.30.82:22-68.220.241.50:37684.service - OpenSSH per-connection server daemon (68.220.241.50:37684).
Mar 14 00:24:40.167126 sshd[4847]: Accepted publickey for core from 68.220.241.50 port 37684 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:24:40.168199 sshd[4847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:24:40.174166 systemd-logind[1958]: New session 13 of user core.
Mar 14 00:24:40.179339 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 14 00:24:40.639169 sshd[4847]: pam_unix(sshd:session): session closed for user core
Mar 14 00:24:40.644184 systemd[1]: sshd@12-172.31.30.82:22-68.220.241.50:37684.service: Deactivated successfully.
Mar 14 00:24:40.644297 systemd-logind[1958]: Session 13 logged out. Waiting for processes to exit.
Mar 14 00:24:40.647580 systemd[1]: session-13.scope: Deactivated successfully.
Mar 14 00:24:40.648796 systemd-logind[1958]: Removed session 13.
Mar 14 00:24:40.727501 systemd[1]: Started sshd@13-172.31.30.82:22-68.220.241.50:37692.service - OpenSSH per-connection server daemon (68.220.241.50:37692).
Mar 14 00:24:41.210493 sshd[4861]: Accepted publickey for core from 68.220.241.50 port 37692 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE
Mar 14 00:24:41.211195 sshd[4861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:24:41.216865 systemd-logind[1958]: New session 14 of user core.
Mar 14 00:24:41.220475 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 14 00:24:41.689013 sshd[4861]: pam_unix(sshd:session): session closed for user core
Mar 14 00:24:41.693426 systemd[1]: sshd@13-172.31.30.82:22-68.220.241.50:37692.service: Deactivated successfully.
Mar 14 00:24:41.695768 systemd[1]: session-14.scope: Deactivated successfully.
Mar 14 00:24:41.697386 systemd-logind[1958]: Session 14 logged out. Waiting for processes to exit. Mar 14 00:24:41.699305 systemd-logind[1958]: Removed session 14. Mar 14 00:24:41.792486 systemd[1]: Started sshd@14-172.31.30.82:22-68.220.241.50:37694.service - OpenSSH per-connection server daemon (68.220.241.50:37694). Mar 14 00:24:42.311871 sshd[4873]: Accepted publickey for core from 68.220.241.50 port 37694 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:24:42.313593 sshd[4873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:24:42.319280 systemd-logind[1958]: New session 15 of user core. Mar 14 00:24:42.323306 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 14 00:24:42.755689 sshd[4873]: pam_unix(sshd:session): session closed for user core Mar 14 00:24:42.760317 systemd[1]: sshd@14-172.31.30.82:22-68.220.241.50:37694.service: Deactivated successfully. Mar 14 00:24:42.763216 systemd[1]: session-15.scope: Deactivated successfully. Mar 14 00:24:42.764530 systemd-logind[1958]: Session 15 logged out. Waiting for processes to exit. Mar 14 00:24:42.766053 systemd-logind[1958]: Removed session 15. Mar 14 00:24:47.848452 systemd[1]: Started sshd@15-172.31.30.82:22-68.220.241.50:58190.service - OpenSSH per-connection server daemon (68.220.241.50:58190). Mar 14 00:24:48.366719 sshd[4888]: Accepted publickey for core from 68.220.241.50 port 58190 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:24:48.368456 sshd[4888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:24:48.373351 systemd-logind[1958]: New session 16 of user core. Mar 14 00:24:48.379327 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 14 00:24:48.805514 sshd[4888]: pam_unix(sshd:session): session closed for user core Mar 14 00:24:48.810054 systemd[1]: sshd@15-172.31.30.82:22-68.220.241.50:58190.service: Deactivated successfully. 
Mar 14 00:24:48.812633 systemd[1]: session-16.scope: Deactivated successfully. Mar 14 00:24:48.813890 systemd-logind[1958]: Session 16 logged out. Waiting for processes to exit. Mar 14 00:24:48.815377 systemd-logind[1958]: Removed session 16. Mar 14 00:24:53.892648 systemd[1]: Started sshd@16-172.31.30.82:22-68.220.241.50:43694.service - OpenSSH per-connection server daemon (68.220.241.50:43694). Mar 14 00:24:54.373930 sshd[4902]: Accepted publickey for core from 68.220.241.50 port 43694 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:24:54.375635 sshd[4902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:24:54.380423 systemd-logind[1958]: New session 17 of user core. Mar 14 00:24:54.388403 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 14 00:24:54.792678 sshd[4902]: pam_unix(sshd:session): session closed for user core Mar 14 00:24:54.796023 systemd[1]: sshd@16-172.31.30.82:22-68.220.241.50:43694.service: Deactivated successfully. Mar 14 00:24:54.798779 systemd[1]: session-17.scope: Deactivated successfully. Mar 14 00:24:54.801612 systemd-logind[1958]: Session 17 logged out. Waiting for processes to exit. Mar 14 00:24:54.803109 systemd-logind[1958]: Removed session 17. Mar 14 00:24:54.899494 systemd[1]: Started sshd@17-172.31.30.82:22-68.220.241.50:43706.service - OpenSSH per-connection server daemon (68.220.241.50:43706). Mar 14 00:24:55.419496 sshd[4914]: Accepted publickey for core from 68.220.241.50 port 43706 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:24:55.421383 sshd[4914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:24:55.426229 systemd-logind[1958]: New session 18 of user core. Mar 14 00:24:55.433337 systemd[1]: Started session-18.scope - Session 18 of User core. 
Mar 14 00:24:56.285740 sshd[4914]: pam_unix(sshd:session): session closed for user core Mar 14 00:24:56.290395 systemd[1]: sshd@17-172.31.30.82:22-68.220.241.50:43706.service: Deactivated successfully. Mar 14 00:24:56.292737 systemd[1]: session-18.scope: Deactivated successfully. Mar 14 00:24:56.293963 systemd-logind[1958]: Session 18 logged out. Waiting for processes to exit. Mar 14 00:24:56.295397 systemd-logind[1958]: Removed session 18. Mar 14 00:24:56.381482 systemd[1]: Started sshd@18-172.31.30.82:22-68.220.241.50:43718.service - OpenSSH per-connection server daemon (68.220.241.50:43718). Mar 14 00:24:56.914034 sshd[4925]: Accepted publickey for core from 68.220.241.50 port 43718 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:24:56.915723 sshd[4925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:24:56.921594 systemd-logind[1958]: New session 19 of user core. Mar 14 00:24:56.928578 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 14 00:24:57.868591 sshd[4925]: pam_unix(sshd:session): session closed for user core Mar 14 00:24:57.875394 systemd[1]: sshd@18-172.31.30.82:22-68.220.241.50:43718.service: Deactivated successfully. Mar 14 00:24:57.878410 systemd[1]: session-19.scope: Deactivated successfully. Mar 14 00:24:57.880589 systemd-logind[1958]: Session 19 logged out. Waiting for processes to exit. Mar 14 00:24:57.881981 systemd-logind[1958]: Removed session 19. Mar 14 00:24:57.955508 systemd[1]: Started sshd@19-172.31.30.82:22-68.220.241.50:43726.service - OpenSSH per-connection server daemon (68.220.241.50:43726). Mar 14 00:24:58.445886 sshd[4943]: Accepted publickey for core from 68.220.241.50 port 43726 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:24:58.446586 sshd[4943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:24:58.456704 systemd-logind[1958]: New session 20 of user core. 
Mar 14 00:24:58.463400 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 14 00:24:59.015233 sshd[4943]: pam_unix(sshd:session): session closed for user core Mar 14 00:24:59.020219 systemd[1]: sshd@19-172.31.30.82:22-68.220.241.50:43726.service: Deactivated successfully. Mar 14 00:24:59.022832 systemd[1]: session-20.scope: Deactivated successfully. Mar 14 00:24:59.023785 systemd-logind[1958]: Session 20 logged out. Waiting for processes to exit. Mar 14 00:24:59.025805 systemd-logind[1958]: Removed session 20. Mar 14 00:24:59.114480 systemd[1]: Started sshd@20-172.31.30.82:22-68.220.241.50:43728.service - OpenSSH per-connection server daemon (68.220.241.50:43728). Mar 14 00:24:59.629140 sshd[4954]: Accepted publickey for core from 68.220.241.50 port 43728 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:24:59.630503 sshd[4954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:24:59.636288 systemd-logind[1958]: New session 21 of user core. Mar 14 00:24:59.643386 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 14 00:25:00.076894 sshd[4954]: pam_unix(sshd:session): session closed for user core Mar 14 00:25:00.091995 systemd[1]: sshd@20-172.31.30.82:22-68.220.241.50:43728.service: Deactivated successfully. Mar 14 00:25:00.100495 systemd[1]: session-21.scope: Deactivated successfully. Mar 14 00:25:00.102570 systemd-logind[1958]: Session 21 logged out. Waiting for processes to exit. Mar 14 00:25:00.106268 systemd-logind[1958]: Removed session 21. Mar 14 00:25:05.197810 systemd[1]: Started sshd@21-172.31.30.82:22-68.220.241.50:37074.service - OpenSSH per-connection server daemon (68.220.241.50:37074). 
Mar 14 00:25:05.739692 sshd[4970]: Accepted publickey for core from 68.220.241.50 port 37074 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:25:05.741752 sshd[4970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:25:05.747983 systemd-logind[1958]: New session 22 of user core. Mar 14 00:25:05.758416 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 14 00:25:06.250493 sshd[4970]: pam_unix(sshd:session): session closed for user core Mar 14 00:25:06.254438 systemd[1]: sshd@21-172.31.30.82:22-68.220.241.50:37074.service: Deactivated successfully. Mar 14 00:25:06.259655 systemd[1]: session-22.scope: Deactivated successfully. Mar 14 00:25:06.261929 systemd-logind[1958]: Session 22 logged out. Waiting for processes to exit. Mar 14 00:25:06.263275 systemd-logind[1958]: Removed session 22. Mar 14 00:25:11.351468 systemd[1]: Started sshd@22-172.31.30.82:22-68.220.241.50:37078.service - OpenSSH per-connection server daemon (68.220.241.50:37078). Mar 14 00:25:11.856931 sshd[4983]: Accepted publickey for core from 68.220.241.50 port 37078 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:25:11.858583 sshd[4983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:25:11.864463 systemd-logind[1958]: New session 23 of user core. Mar 14 00:25:11.874379 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 14 00:25:12.297742 sshd[4983]: pam_unix(sshd:session): session closed for user core Mar 14 00:25:12.303623 systemd[1]: sshd@22-172.31.30.82:22-68.220.241.50:37078.service: Deactivated successfully. Mar 14 00:25:12.306475 systemd[1]: session-23.scope: Deactivated successfully. Mar 14 00:25:12.308936 systemd-logind[1958]: Session 23 logged out. Waiting for processes to exit. Mar 14 00:25:12.311241 systemd-logind[1958]: Removed session 23. 
Mar 14 00:25:12.393520 systemd[1]: Started sshd@23-172.31.30.82:22-68.220.241.50:44114.service - OpenSSH per-connection server daemon (68.220.241.50:44114). Mar 14 00:25:12.914310 sshd[4995]: Accepted publickey for core from 68.220.241.50 port 44114 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:25:12.914984 sshd[4995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:25:12.921197 systemd-logind[1958]: New session 24 of user core. Mar 14 00:25:12.932585 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 14 00:25:14.555480 containerd[1970]: time="2026-03-14T00:25:14.555430720Z" level=info msg="StopContainer for \"75d77585118bc50c6086371ce2b088b48065430da18c3c50a48450b498f2ba38\" with timeout 30 (s)" Mar 14 00:25:14.560401 containerd[1970]: time="2026-03-14T00:25:14.559909044Z" level=info msg="Stop container \"75d77585118bc50c6086371ce2b088b48065430da18c3c50a48450b498f2ba38\" with signal terminated" Mar 14 00:25:14.604988 systemd[1]: cri-containerd-75d77585118bc50c6086371ce2b088b48065430da18c3c50a48450b498f2ba38.scope: Deactivated successfully. Mar 14 00:25:14.649631 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75d77585118bc50c6086371ce2b088b48065430da18c3c50a48450b498f2ba38-rootfs.mount: Deactivated successfully. 
Mar 14 00:25:14.654381 containerd[1970]: time="2026-03-14T00:25:14.654336211Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 14 00:25:14.657791 containerd[1970]: time="2026-03-14T00:25:14.657749456Z" level=info msg="StopContainer for \"798135cc7a7e318bf59e16baf5507d277f4ca5d94bead63340247cd2e61bb208\" with timeout 2 (s)" Mar 14 00:25:14.659437 containerd[1970]: time="2026-03-14T00:25:14.658069185Z" level=info msg="Stop container \"798135cc7a7e318bf59e16baf5507d277f4ca5d94bead63340247cd2e61bb208\" with signal terminated" Mar 14 00:25:14.668579 containerd[1970]: time="2026-03-14T00:25:14.667539814Z" level=info msg="shim disconnected" id=75d77585118bc50c6086371ce2b088b48065430da18c3c50a48450b498f2ba38 namespace=k8s.io Mar 14 00:25:14.668579 containerd[1970]: time="2026-03-14T00:25:14.667623026Z" level=warning msg="cleaning up after shim disconnected" id=75d77585118bc50c6086371ce2b088b48065430da18c3c50a48450b498f2ba38 namespace=k8s.io Mar 14 00:25:14.668579 containerd[1970]: time="2026-03-14T00:25:14.667635515Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:25:14.676957 systemd-networkd[1898]: lxc_health: Link DOWN Mar 14 00:25:14.676968 systemd-networkd[1898]: lxc_health: Lost carrier Mar 14 00:25:14.696373 systemd[1]: cri-containerd-798135cc7a7e318bf59e16baf5507d277f4ca5d94bead63340247cd2e61bb208.scope: Deactivated successfully. Mar 14 00:25:14.696688 systemd[1]: cri-containerd-798135cc7a7e318bf59e16baf5507d277f4ca5d94bead63340247cd2e61bb208.scope: Consumed 8.317s CPU time. 
Mar 14 00:25:14.718417 containerd[1970]: time="2026-03-14T00:25:14.718277217Z" level=info msg="StopContainer for \"75d77585118bc50c6086371ce2b088b48065430da18c3c50a48450b498f2ba38\" returns successfully" Mar 14 00:25:14.722505 containerd[1970]: time="2026-03-14T00:25:14.722444897Z" level=info msg="StopPodSandbox for \"5a63dcfbe3c993ab7a97fd8810d8c50506048a4f5ca9e63f251ee003ee639006\"" Mar 14 00:25:14.722505 containerd[1970]: time="2026-03-14T00:25:14.722507105Z" level=info msg="Container to stop \"75d77585118bc50c6086371ce2b088b48065430da18c3c50a48450b498f2ba38\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 14 00:25:14.727248 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5a63dcfbe3c993ab7a97fd8810d8c50506048a4f5ca9e63f251ee003ee639006-shm.mount: Deactivated successfully. Mar 14 00:25:14.740482 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-798135cc7a7e318bf59e16baf5507d277f4ca5d94bead63340247cd2e61bb208-rootfs.mount: Deactivated successfully. Mar 14 00:25:14.744595 systemd[1]: cri-containerd-5a63dcfbe3c993ab7a97fd8810d8c50506048a4f5ca9e63f251ee003ee639006.scope: Deactivated successfully. Mar 14 00:25:14.765551 containerd[1970]: time="2026-03-14T00:25:14.765455487Z" level=info msg="shim disconnected" id=798135cc7a7e318bf59e16baf5507d277f4ca5d94bead63340247cd2e61bb208 namespace=k8s.io Mar 14 00:25:14.765551 containerd[1970]: time="2026-03-14T00:25:14.765542660Z" level=warning msg="cleaning up after shim disconnected" id=798135cc7a7e318bf59e16baf5507d277f4ca5d94bead63340247cd2e61bb208 namespace=k8s.io Mar 14 00:25:14.765551 containerd[1970]: time="2026-03-14T00:25:14.765554192Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:25:14.794048 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a63dcfbe3c993ab7a97fd8810d8c50506048a4f5ca9e63f251ee003ee639006-rootfs.mount: Deactivated successfully. 
Mar 14 00:25:14.797581 containerd[1970]: time="2026-03-14T00:25:14.797495521Z" level=warning msg="cleanup warnings time=\"2026-03-14T00:25:14Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 14 00:25:14.807531 containerd[1970]: time="2026-03-14T00:25:14.805396037Z" level=info msg="shim disconnected" id=5a63dcfbe3c993ab7a97fd8810d8c50506048a4f5ca9e63f251ee003ee639006 namespace=k8s.io Mar 14 00:25:14.807531 containerd[1970]: time="2026-03-14T00:25:14.805460506Z" level=warning msg="cleaning up after shim disconnected" id=5a63dcfbe3c993ab7a97fd8810d8c50506048a4f5ca9e63f251ee003ee639006 namespace=k8s.io Mar 14 00:25:14.807531 containerd[1970]: time="2026-03-14T00:25:14.805473211Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:25:14.816674 containerd[1970]: time="2026-03-14T00:25:14.816624124Z" level=info msg="StopContainer for \"798135cc7a7e318bf59e16baf5507d277f4ca5d94bead63340247cd2e61bb208\" returns successfully" Mar 14 00:25:14.822609 containerd[1970]: time="2026-03-14T00:25:14.822553773Z" level=info msg="StopPodSandbox for \"7e923b090b07d4f9342ce6f5fa2197faa1425c2570ca215d25caf75735d769e6\"" Mar 14 00:25:14.822765 containerd[1970]: time="2026-03-14T00:25:14.822641622Z" level=info msg="Container to stop \"ee6a45a89bad2c4fac347af6ea5d685e9b8f9aeecedcf34702b0c96c1c3069ae\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 14 00:25:14.822765 containerd[1970]: time="2026-03-14T00:25:14.822661332Z" level=info msg="Container to stop \"d6724c7d99f015deddfedd9ee227bdf9e7ec6b71f4c2299fccd62d751698a950\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 14 00:25:14.822765 containerd[1970]: time="2026-03-14T00:25:14.822682943Z" level=info msg="Container to stop \"b5bd44a864a8a76e06fa62baab72090912ae1f32fbe376199165e4599537d736\" must be in running or unknown state, current state 
\"CONTAINER_EXITED\"" Mar 14 00:25:14.822765 containerd[1970]: time="2026-03-14T00:25:14.822697564Z" level=info msg="Container to stop \"7c7608e2552e7804fcd3714c279236fa0acd0ab94203703678fef6984eb634cf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 14 00:25:14.822765 containerd[1970]: time="2026-03-14T00:25:14.822711616Z" level=info msg="Container to stop \"798135cc7a7e318bf59e16baf5507d277f4ca5d94bead63340247cd2e61bb208\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 14 00:25:14.843380 systemd[1]: cri-containerd-7e923b090b07d4f9342ce6f5fa2197faa1425c2570ca215d25caf75735d769e6.scope: Deactivated successfully. Mar 14 00:25:14.863360 containerd[1970]: time="2026-03-14T00:25:14.861373272Z" level=info msg="TearDown network for sandbox \"5a63dcfbe3c993ab7a97fd8810d8c50506048a4f5ca9e63f251ee003ee639006\" successfully" Mar 14 00:25:14.863360 containerd[1970]: time="2026-03-14T00:25:14.861423252Z" level=info msg="StopPodSandbox for \"5a63dcfbe3c993ab7a97fd8810d8c50506048a4f5ca9e63f251ee003ee639006\" returns successfully" Mar 14 00:25:14.893539 containerd[1970]: time="2026-03-14T00:25:14.893263601Z" level=info msg="shim disconnected" id=7e923b090b07d4f9342ce6f5fa2197faa1425c2570ca215d25caf75735d769e6 namespace=k8s.io Mar 14 00:25:14.893539 containerd[1970]: time="2026-03-14T00:25:14.893333783Z" level=warning msg="cleaning up after shim disconnected" id=7e923b090b07d4f9342ce6f5fa2197faa1425c2570ca215d25caf75735d769e6 namespace=k8s.io Mar 14 00:25:14.893539 containerd[1970]: time="2026-03-14T00:25:14.893346004Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:25:14.913304 containerd[1970]: time="2026-03-14T00:25:14.913150142Z" level=info msg="TearDown network for sandbox \"7e923b090b07d4f9342ce6f5fa2197faa1425c2570ca215d25caf75735d769e6\" successfully" Mar 14 00:25:14.913304 containerd[1970]: time="2026-03-14T00:25:14.913192103Z" level=info msg="StopPodSandbox for 
\"7e923b090b07d4f9342ce6f5fa2197faa1425c2570ca215d25caf75735d769e6\" returns successfully" Mar 14 00:25:15.018171 kubelet[3407]: I0314 00:25:15.018071 3407 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b46fd587-6391-4eef-88a2-4c8495707809-xtables-lock\") pod \"b46fd587-6391-4eef-88a2-4c8495707809\" (UID: \"b46fd587-6391-4eef-88a2-4c8495707809\") " Mar 14 00:25:15.018171 kubelet[3407]: I0314 00:25:15.018175 3407 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b46fd587-6391-4eef-88a2-4c8495707809-cilium-cgroup\") pod \"b46fd587-6391-4eef-88a2-4c8495707809\" (UID: \"b46fd587-6391-4eef-88a2-4c8495707809\") " Mar 14 00:25:15.018938 kubelet[3407]: I0314 00:25:15.018221 3407 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szfzv\" (UniqueName: \"kubernetes.io/projected/b46fd587-6391-4eef-88a2-4c8495707809-kube-api-access-szfzv\") pod \"b46fd587-6391-4eef-88a2-4c8495707809\" (UID: \"b46fd587-6391-4eef-88a2-4c8495707809\") " Mar 14 00:25:15.018938 kubelet[3407]: I0314 00:25:15.018243 3407 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b46fd587-6391-4eef-88a2-4c8495707809-cni-path\") pod \"b46fd587-6391-4eef-88a2-4c8495707809\" (UID: \"b46fd587-6391-4eef-88a2-4c8495707809\") " Mar 14 00:25:15.018938 kubelet[3407]: I0314 00:25:15.018276 3407 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b46fd587-6391-4eef-88a2-4c8495707809-etc-cni-netd\") pod \"b46fd587-6391-4eef-88a2-4c8495707809\" (UID: \"b46fd587-6391-4eef-88a2-4c8495707809\") " Mar 14 00:25:15.018938 kubelet[3407]: I0314 00:25:15.018299 3407 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/b46fd587-6391-4eef-88a2-4c8495707809-lib-modules\") pod \"b46fd587-6391-4eef-88a2-4c8495707809\" (UID: \"b46fd587-6391-4eef-88a2-4c8495707809\") " Mar 14 00:25:15.018938 kubelet[3407]: I0314 00:25:15.018328 3407 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b46fd587-6391-4eef-88a2-4c8495707809-host-proc-sys-net\") pod \"b46fd587-6391-4eef-88a2-4c8495707809\" (UID: \"b46fd587-6391-4eef-88a2-4c8495707809\") " Mar 14 00:25:15.018938 kubelet[3407]: I0314 00:25:15.018355 3407 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b46fd587-6391-4eef-88a2-4c8495707809-hostproc\") pod \"b46fd587-6391-4eef-88a2-4c8495707809\" (UID: \"b46fd587-6391-4eef-88a2-4c8495707809\") " Mar 14 00:25:15.019368 kubelet[3407]: I0314 00:25:15.018383 3407 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b46fd587-6391-4eef-88a2-4c8495707809-cilium-run\") pod \"b46fd587-6391-4eef-88a2-4c8495707809\" (UID: \"b46fd587-6391-4eef-88a2-4c8495707809\") " Mar 14 00:25:15.019368 kubelet[3407]: I0314 00:25:15.018409 3407 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b46fd587-6391-4eef-88a2-4c8495707809-clustermesh-secrets\") pod \"b46fd587-6391-4eef-88a2-4c8495707809\" (UID: \"b46fd587-6391-4eef-88a2-4c8495707809\") " Mar 14 00:25:15.019368 kubelet[3407]: I0314 00:25:15.018436 3407 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b46fd587-6391-4eef-88a2-4c8495707809-cilium-config-path\") pod \"b46fd587-6391-4eef-88a2-4c8495707809\" (UID: \"b46fd587-6391-4eef-88a2-4c8495707809\") " Mar 14 00:25:15.019368 kubelet[3407]: I0314 
00:25:15.018462 3407 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccvzj\" (UniqueName: \"kubernetes.io/projected/405527d2-65a6-4b41-b0eb-587369dcdd67-kube-api-access-ccvzj\") pod \"405527d2-65a6-4b41-b0eb-587369dcdd67\" (UID: \"405527d2-65a6-4b41-b0eb-587369dcdd67\") " Mar 14 00:25:15.019368 kubelet[3407]: I0314 00:25:15.018487 3407 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b46fd587-6391-4eef-88a2-4c8495707809-hubble-tls\") pod \"b46fd587-6391-4eef-88a2-4c8495707809\" (UID: \"b46fd587-6391-4eef-88a2-4c8495707809\") " Mar 14 00:25:15.019368 kubelet[3407]: I0314 00:25:15.018509 3407 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b46fd587-6391-4eef-88a2-4c8495707809-host-proc-sys-kernel\") pod \"b46fd587-6391-4eef-88a2-4c8495707809\" (UID: \"b46fd587-6391-4eef-88a2-4c8495707809\") " Mar 14 00:25:15.019686 kubelet[3407]: I0314 00:25:15.018534 3407 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/405527d2-65a6-4b41-b0eb-587369dcdd67-cilium-config-path\") pod \"405527d2-65a6-4b41-b0eb-587369dcdd67\" (UID: \"405527d2-65a6-4b41-b0eb-587369dcdd67\") " Mar 14 00:25:15.019686 kubelet[3407]: I0314 00:25:15.018557 3407 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b46fd587-6391-4eef-88a2-4c8495707809-bpf-maps\") pod \"b46fd587-6391-4eef-88a2-4c8495707809\" (UID: \"b46fd587-6391-4eef-88a2-4c8495707809\") " Mar 14 00:25:15.037249 kubelet[3407]: I0314 00:25:15.036995 3407 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b46fd587-6391-4eef-88a2-4c8495707809-cilium-run" (OuterVolumeSpecName: "cilium-run") pod 
"b46fd587-6391-4eef-88a2-4c8495707809" (UID: "b46fd587-6391-4eef-88a2-4c8495707809"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:25:15.040214 kubelet[3407]: I0314 00:25:15.038986 3407 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b46fd587-6391-4eef-88a2-4c8495707809-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b46fd587-6391-4eef-88a2-4c8495707809" (UID: "b46fd587-6391-4eef-88a2-4c8495707809"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:25:15.040214 kubelet[3407]: I0314 00:25:15.039057 3407 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b46fd587-6391-4eef-88a2-4c8495707809-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b46fd587-6391-4eef-88a2-4c8495707809" (UID: "b46fd587-6391-4eef-88a2-4c8495707809"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:25:15.047162 kubelet[3407]: I0314 00:25:15.046008 3407 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b46fd587-6391-4eef-88a2-4c8495707809-kube-api-access-szfzv" (OuterVolumeSpecName: "kube-api-access-szfzv") pod "b46fd587-6391-4eef-88a2-4c8495707809" (UID: "b46fd587-6391-4eef-88a2-4c8495707809"). InnerVolumeSpecName "kube-api-access-szfzv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 14 00:25:15.047162 kubelet[3407]: I0314 00:25:15.046113 3407 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b46fd587-6391-4eef-88a2-4c8495707809-cni-path" (OuterVolumeSpecName: "cni-path") pod "b46fd587-6391-4eef-88a2-4c8495707809" (UID: "b46fd587-6391-4eef-88a2-4c8495707809"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:25:15.047162 kubelet[3407]: I0314 00:25:15.046142 3407 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b46fd587-6391-4eef-88a2-4c8495707809-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b46fd587-6391-4eef-88a2-4c8495707809" (UID: "b46fd587-6391-4eef-88a2-4c8495707809"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:25:15.047162 kubelet[3407]: I0314 00:25:15.046164 3407 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b46fd587-6391-4eef-88a2-4c8495707809-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b46fd587-6391-4eef-88a2-4c8495707809" (UID: "b46fd587-6391-4eef-88a2-4c8495707809"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:25:15.047162 kubelet[3407]: I0314 00:25:15.046186 3407 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b46fd587-6391-4eef-88a2-4c8495707809-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b46fd587-6391-4eef-88a2-4c8495707809" (UID: "b46fd587-6391-4eef-88a2-4c8495707809"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:25:15.047516 kubelet[3407]: I0314 00:25:15.029004 3407 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b46fd587-6391-4eef-88a2-4c8495707809-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b46fd587-6391-4eef-88a2-4c8495707809" (UID: "b46fd587-6391-4eef-88a2-4c8495707809"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:25:15.047516 kubelet[3407]: I0314 00:25:15.028375 3407 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b46fd587-6391-4eef-88a2-4c8495707809-hostproc" (OuterVolumeSpecName: "hostproc") pod "b46fd587-6391-4eef-88a2-4c8495707809" (UID: "b46fd587-6391-4eef-88a2-4c8495707809"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:25:15.049158 kubelet[3407]: I0314 00:25:15.049040 3407 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b46fd587-6391-4eef-88a2-4c8495707809-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b46fd587-6391-4eef-88a2-4c8495707809" (UID: "b46fd587-6391-4eef-88a2-4c8495707809"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 14 00:25:15.049372 kubelet[3407]: I0314 00:25:15.049348 3407 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b46fd587-6391-4eef-88a2-4c8495707809-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b46fd587-6391-4eef-88a2-4c8495707809" (UID: "b46fd587-6391-4eef-88a2-4c8495707809"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:25:15.054938 kubelet[3407]: I0314 00:25:15.054883 3407 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/405527d2-65a6-4b41-b0eb-587369dcdd67-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "405527d2-65a6-4b41-b0eb-587369dcdd67" (UID: "405527d2-65a6-4b41-b0eb-587369dcdd67"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 14 00:25:15.056524 kubelet[3407]: I0314 00:25:15.056432 3407 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/405527d2-65a6-4b41-b0eb-587369dcdd67-kube-api-access-ccvzj" (OuterVolumeSpecName: "kube-api-access-ccvzj") pod "405527d2-65a6-4b41-b0eb-587369dcdd67" (UID: "405527d2-65a6-4b41-b0eb-587369dcdd67"). InnerVolumeSpecName "kube-api-access-ccvzj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 14 00:25:15.058306 kubelet[3407]: I0314 00:25:15.058218 3407 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b46fd587-6391-4eef-88a2-4c8495707809-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b46fd587-6391-4eef-88a2-4c8495707809" (UID: "b46fd587-6391-4eef-88a2-4c8495707809"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 14 00:25:15.064398 kubelet[3407]: I0314 00:25:15.059879 3407 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b46fd587-6391-4eef-88a2-4c8495707809-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b46fd587-6391-4eef-88a2-4c8495707809" (UID: "b46fd587-6391-4eef-88a2-4c8495707809"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 14 00:25:15.112200 systemd[1]: Removed slice kubepods-besteffort-pod405527d2_65a6_4b41_b0eb_587369dcdd67.slice - libcontainer container kubepods-besteffort-pod405527d2_65a6_4b41_b0eb_587369dcdd67.slice. 
Mar 14 00:25:15.125469 kubelet[3407]: I0314 00:25:15.124659 3407 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b46fd587-6391-4eef-88a2-4c8495707809-bpf-maps\") on node \"ip-172-31-30-82\" DevicePath \"\"" Mar 14 00:25:15.125469 kubelet[3407]: I0314 00:25:15.124699 3407 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b46fd587-6391-4eef-88a2-4c8495707809-xtables-lock\") on node \"ip-172-31-30-82\" DevicePath \"\"" Mar 14 00:25:15.125469 kubelet[3407]: I0314 00:25:15.124714 3407 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b46fd587-6391-4eef-88a2-4c8495707809-cilium-cgroup\") on node \"ip-172-31-30-82\" DevicePath \"\"" Mar 14 00:25:15.125469 kubelet[3407]: I0314 00:25:15.124729 3407 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-szfzv\" (UniqueName: \"kubernetes.io/projected/b46fd587-6391-4eef-88a2-4c8495707809-kube-api-access-szfzv\") on node \"ip-172-31-30-82\" DevicePath \"\"" Mar 14 00:25:15.125469 kubelet[3407]: I0314 00:25:15.124744 3407 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b46fd587-6391-4eef-88a2-4c8495707809-cni-path\") on node \"ip-172-31-30-82\" DevicePath \"\"" Mar 14 00:25:15.125469 kubelet[3407]: I0314 00:25:15.124757 3407 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b46fd587-6391-4eef-88a2-4c8495707809-etc-cni-netd\") on node \"ip-172-31-30-82\" DevicePath \"\"" Mar 14 00:25:15.125469 kubelet[3407]: I0314 00:25:15.124769 3407 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b46fd587-6391-4eef-88a2-4c8495707809-lib-modules\") on node \"ip-172-31-30-82\" DevicePath \"\"" Mar 14 00:25:15.125469 kubelet[3407]: I0314 00:25:15.124782 3407 
reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b46fd587-6391-4eef-88a2-4c8495707809-host-proc-sys-net\") on node \"ip-172-31-30-82\" DevicePath \"\"" Mar 14 00:25:15.125899 kubelet[3407]: I0314 00:25:15.124795 3407 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b46fd587-6391-4eef-88a2-4c8495707809-hostproc\") on node \"ip-172-31-30-82\" DevicePath \"\"" Mar 14 00:25:15.125899 kubelet[3407]: I0314 00:25:15.124807 3407 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b46fd587-6391-4eef-88a2-4c8495707809-cilium-run\") on node \"ip-172-31-30-82\" DevicePath \"\"" Mar 14 00:25:15.125899 kubelet[3407]: I0314 00:25:15.124820 3407 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b46fd587-6391-4eef-88a2-4c8495707809-clustermesh-secrets\") on node \"ip-172-31-30-82\" DevicePath \"\"" Mar 14 00:25:15.125899 kubelet[3407]: I0314 00:25:15.124832 3407 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b46fd587-6391-4eef-88a2-4c8495707809-cilium-config-path\") on node \"ip-172-31-30-82\" DevicePath \"\"" Mar 14 00:25:15.125899 kubelet[3407]: I0314 00:25:15.124845 3407 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ccvzj\" (UniqueName: \"kubernetes.io/projected/405527d2-65a6-4b41-b0eb-587369dcdd67-kube-api-access-ccvzj\") on node \"ip-172-31-30-82\" DevicePath \"\"" Mar 14 00:25:15.125899 kubelet[3407]: I0314 00:25:15.124858 3407 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b46fd587-6391-4eef-88a2-4c8495707809-hubble-tls\") on node \"ip-172-31-30-82\" DevicePath \"\"" Mar 14 00:25:15.125899 kubelet[3407]: I0314 00:25:15.124871 3407 reconciler_common.go:299] "Volume detached for volume 
\"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b46fd587-6391-4eef-88a2-4c8495707809-host-proc-sys-kernel\") on node \"ip-172-31-30-82\" DevicePath \"\"" Mar 14 00:25:15.125899 kubelet[3407]: I0314 00:25:15.124886 3407 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/405527d2-65a6-4b41-b0eb-587369dcdd67-cilium-config-path\") on node \"ip-172-31-30-82\" DevicePath \"\"" Mar 14 00:25:15.136196 kubelet[3407]: I0314 00:25:15.136164 3407 scope.go:117] "RemoveContainer" containerID="75d77585118bc50c6086371ce2b088b48065430da18c3c50a48450b498f2ba38" Mar 14 00:25:15.154139 systemd[1]: Removed slice kubepods-burstable-podb46fd587_6391_4eef_88a2_4c8495707809.slice - libcontainer container kubepods-burstable-podb46fd587_6391_4eef_88a2_4c8495707809.slice. Mar 14 00:25:15.154322 systemd[1]: kubepods-burstable-podb46fd587_6391_4eef_88a2_4c8495707809.slice: Consumed 8.435s CPU time. Mar 14 00:25:15.171130 containerd[1970]: time="2026-03-14T00:25:15.170520551Z" level=info msg="RemoveContainer for \"75d77585118bc50c6086371ce2b088b48065430da18c3c50a48450b498f2ba38\"" Mar 14 00:25:15.181589 containerd[1970]: time="2026-03-14T00:25:15.181205569Z" level=info msg="RemoveContainer for \"75d77585118bc50c6086371ce2b088b48065430da18c3c50a48450b498f2ba38\" returns successfully" Mar 14 00:25:15.182218 kubelet[3407]: I0314 00:25:15.182193 3407 scope.go:117] "RemoveContainer" containerID="75d77585118bc50c6086371ce2b088b48065430da18c3c50a48450b498f2ba38" Mar 14 00:25:15.206283 containerd[1970]: time="2026-03-14T00:25:15.191128826Z" level=error msg="ContainerStatus for \"75d77585118bc50c6086371ce2b088b48065430da18c3c50a48450b498f2ba38\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"75d77585118bc50c6086371ce2b088b48065430da18c3c50a48450b498f2ba38\": not found" Mar 14 00:25:15.225584 kubelet[3407]: E0314 00:25:15.225524 3407 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"75d77585118bc50c6086371ce2b088b48065430da18c3c50a48450b498f2ba38\": not found" containerID="75d77585118bc50c6086371ce2b088b48065430da18c3c50a48450b498f2ba38" Mar 14 00:25:15.245473 kubelet[3407]: I0314 00:25:15.225606 3407 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"75d77585118bc50c6086371ce2b088b48065430da18c3c50a48450b498f2ba38"} err="failed to get container status \"75d77585118bc50c6086371ce2b088b48065430da18c3c50a48450b498f2ba38\": rpc error: code = NotFound desc = an error occurred when try to find container \"75d77585118bc50c6086371ce2b088b48065430da18c3c50a48450b498f2ba38\": not found" Mar 14 00:25:15.245473 kubelet[3407]: I0314 00:25:15.245379 3407 scope.go:117] "RemoveContainer" containerID="798135cc7a7e318bf59e16baf5507d277f4ca5d94bead63340247cd2e61bb208" Mar 14 00:25:15.247441 containerd[1970]: time="2026-03-14T00:25:15.247402895Z" level=info msg="RemoveContainer for \"798135cc7a7e318bf59e16baf5507d277f4ca5d94bead63340247cd2e61bb208\"" Mar 14 00:25:15.253149 containerd[1970]: time="2026-03-14T00:25:15.253077789Z" level=info msg="RemoveContainer for \"798135cc7a7e318bf59e16baf5507d277f4ca5d94bead63340247cd2e61bb208\" returns successfully" Mar 14 00:25:15.253604 kubelet[3407]: I0314 00:25:15.253440 3407 scope.go:117] "RemoveContainer" containerID="7c7608e2552e7804fcd3714c279236fa0acd0ab94203703678fef6984eb634cf" Mar 14 00:25:15.254814 containerd[1970]: time="2026-03-14T00:25:15.254764817Z" level=info msg="RemoveContainer for \"7c7608e2552e7804fcd3714c279236fa0acd0ab94203703678fef6984eb634cf\"" Mar 14 00:25:15.261079 containerd[1970]: time="2026-03-14T00:25:15.261009827Z" level=info msg="RemoveContainer for \"7c7608e2552e7804fcd3714c279236fa0acd0ab94203703678fef6984eb634cf\" returns successfully" Mar 14 00:25:15.261377 kubelet[3407]: I0314 00:25:15.261348 3407 scope.go:117] "RemoveContainer" 
containerID="b5bd44a864a8a76e06fa62baab72090912ae1f32fbe376199165e4599537d736" Mar 14 00:25:15.265219 containerd[1970]: time="2026-03-14T00:25:15.264874090Z" level=info msg="RemoveContainer for \"b5bd44a864a8a76e06fa62baab72090912ae1f32fbe376199165e4599537d736\"" Mar 14 00:25:15.270233 containerd[1970]: time="2026-03-14T00:25:15.270191833Z" level=info msg="RemoveContainer for \"b5bd44a864a8a76e06fa62baab72090912ae1f32fbe376199165e4599537d736\" returns successfully" Mar 14 00:25:15.270458 kubelet[3407]: I0314 00:25:15.270420 3407 scope.go:117] "RemoveContainer" containerID="d6724c7d99f015deddfedd9ee227bdf9e7ec6b71f4c2299fccd62d751698a950" Mar 14 00:25:15.272230 containerd[1970]: time="2026-03-14T00:25:15.271864934Z" level=info msg="RemoveContainer for \"d6724c7d99f015deddfedd9ee227bdf9e7ec6b71f4c2299fccd62d751698a950\"" Mar 14 00:25:15.277556 containerd[1970]: time="2026-03-14T00:25:15.277499255Z" level=info msg="RemoveContainer for \"d6724c7d99f015deddfedd9ee227bdf9e7ec6b71f4c2299fccd62d751698a950\" returns successfully" Mar 14 00:25:15.277824 kubelet[3407]: I0314 00:25:15.277787 3407 scope.go:117] "RemoveContainer" containerID="ee6a45a89bad2c4fac347af6ea5d685e9b8f9aeecedcf34702b0c96c1c3069ae" Mar 14 00:25:15.279221 containerd[1970]: time="2026-03-14T00:25:15.279187039Z" level=info msg="RemoveContainer for \"ee6a45a89bad2c4fac347af6ea5d685e9b8f9aeecedcf34702b0c96c1c3069ae\"" Mar 14 00:25:15.284652 containerd[1970]: time="2026-03-14T00:25:15.284607436Z" level=info msg="RemoveContainer for \"ee6a45a89bad2c4fac347af6ea5d685e9b8f9aeecedcf34702b0c96c1c3069ae\" returns successfully" Mar 14 00:25:15.285006 kubelet[3407]: I0314 00:25:15.284967 3407 scope.go:117] "RemoveContainer" containerID="798135cc7a7e318bf59e16baf5507d277f4ca5d94bead63340247cd2e61bb208" Mar 14 00:25:15.285343 containerd[1970]: time="2026-03-14T00:25:15.285290562Z" level=error msg="ContainerStatus for \"798135cc7a7e318bf59e16baf5507d277f4ca5d94bead63340247cd2e61bb208\" failed" error="rpc error: code = 
NotFound desc = an error occurred when try to find container \"798135cc7a7e318bf59e16baf5507d277f4ca5d94bead63340247cd2e61bb208\": not found" Mar 14 00:25:15.285522 kubelet[3407]: E0314 00:25:15.285492 3407 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"798135cc7a7e318bf59e16baf5507d277f4ca5d94bead63340247cd2e61bb208\": not found" containerID="798135cc7a7e318bf59e16baf5507d277f4ca5d94bead63340247cd2e61bb208" Mar 14 00:25:15.285693 kubelet[3407]: I0314 00:25:15.285530 3407 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"798135cc7a7e318bf59e16baf5507d277f4ca5d94bead63340247cd2e61bb208"} err="failed to get container status \"798135cc7a7e318bf59e16baf5507d277f4ca5d94bead63340247cd2e61bb208\": rpc error: code = NotFound desc = an error occurred when try to find container \"798135cc7a7e318bf59e16baf5507d277f4ca5d94bead63340247cd2e61bb208\": not found" Mar 14 00:25:15.285693 kubelet[3407]: I0314 00:25:15.285566 3407 scope.go:117] "RemoveContainer" containerID="7c7608e2552e7804fcd3714c279236fa0acd0ab94203703678fef6984eb634cf" Mar 14 00:25:15.285924 containerd[1970]: time="2026-03-14T00:25:15.285874380Z" level=error msg="ContainerStatus for \"7c7608e2552e7804fcd3714c279236fa0acd0ab94203703678fef6984eb634cf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7c7608e2552e7804fcd3714c279236fa0acd0ab94203703678fef6984eb634cf\": not found" Mar 14 00:25:15.286105 kubelet[3407]: E0314 00:25:15.286056 3407 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7c7608e2552e7804fcd3714c279236fa0acd0ab94203703678fef6984eb634cf\": not found" containerID="7c7608e2552e7804fcd3714c279236fa0acd0ab94203703678fef6984eb634cf" Mar 14 00:25:15.286185 kubelet[3407]: I0314 00:25:15.286120 3407 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7c7608e2552e7804fcd3714c279236fa0acd0ab94203703678fef6984eb634cf"} err="failed to get container status \"7c7608e2552e7804fcd3714c279236fa0acd0ab94203703678fef6984eb634cf\": rpc error: code = NotFound desc = an error occurred when try to find container \"7c7608e2552e7804fcd3714c279236fa0acd0ab94203703678fef6984eb634cf\": not found" Mar 14 00:25:15.286185 kubelet[3407]: I0314 00:25:15.286146 3407 scope.go:117] "RemoveContainer" containerID="b5bd44a864a8a76e06fa62baab72090912ae1f32fbe376199165e4599537d736" Mar 14 00:25:15.286353 containerd[1970]: time="2026-03-14T00:25:15.286315488Z" level=error msg="ContainerStatus for \"b5bd44a864a8a76e06fa62baab72090912ae1f32fbe376199165e4599537d736\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b5bd44a864a8a76e06fa62baab72090912ae1f32fbe376199165e4599537d736\": not found" Mar 14 00:25:15.286471 kubelet[3407]: E0314 00:25:15.286430 3407 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b5bd44a864a8a76e06fa62baab72090912ae1f32fbe376199165e4599537d736\": not found" containerID="b5bd44a864a8a76e06fa62baab72090912ae1f32fbe376199165e4599537d736" Mar 14 00:25:15.286524 kubelet[3407]: I0314 00:25:15.286472 3407 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b5bd44a864a8a76e06fa62baab72090912ae1f32fbe376199165e4599537d736"} err="failed to get container status \"b5bd44a864a8a76e06fa62baab72090912ae1f32fbe376199165e4599537d736\": rpc error: code = NotFound desc = an error occurred when try to find container \"b5bd44a864a8a76e06fa62baab72090912ae1f32fbe376199165e4599537d736\": not found" Mar 14 00:25:15.286524 kubelet[3407]: I0314 00:25:15.286495 3407 scope.go:117] "RemoveContainer" 
containerID="d6724c7d99f015deddfedd9ee227bdf9e7ec6b71f4c2299fccd62d751698a950" Mar 14 00:25:15.286870 containerd[1970]: time="2026-03-14T00:25:15.286829739Z" level=error msg="ContainerStatus for \"d6724c7d99f015deddfedd9ee227bdf9e7ec6b71f4c2299fccd62d751698a950\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d6724c7d99f015deddfedd9ee227bdf9e7ec6b71f4c2299fccd62d751698a950\": not found" Mar 14 00:25:15.287035 kubelet[3407]: E0314 00:25:15.287004 3407 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d6724c7d99f015deddfedd9ee227bdf9e7ec6b71f4c2299fccd62d751698a950\": not found" containerID="d6724c7d99f015deddfedd9ee227bdf9e7ec6b71f4c2299fccd62d751698a950" Mar 14 00:25:15.287119 kubelet[3407]: I0314 00:25:15.287036 3407 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d6724c7d99f015deddfedd9ee227bdf9e7ec6b71f4c2299fccd62d751698a950"} err="failed to get container status \"d6724c7d99f015deddfedd9ee227bdf9e7ec6b71f4c2299fccd62d751698a950\": rpc error: code = NotFound desc = an error occurred when try to find container \"d6724c7d99f015deddfedd9ee227bdf9e7ec6b71f4c2299fccd62d751698a950\": not found" Mar 14 00:25:15.287119 kubelet[3407]: I0314 00:25:15.287055 3407 scope.go:117] "RemoveContainer" containerID="ee6a45a89bad2c4fac347af6ea5d685e9b8f9aeecedcf34702b0c96c1c3069ae" Mar 14 00:25:15.287278 containerd[1970]: time="2026-03-14T00:25:15.287237862Z" level=error msg="ContainerStatus for \"ee6a45a89bad2c4fac347af6ea5d685e9b8f9aeecedcf34702b0c96c1c3069ae\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ee6a45a89bad2c4fac347af6ea5d685e9b8f9aeecedcf34702b0c96c1c3069ae\": not found" Mar 14 00:25:15.287388 kubelet[3407]: E0314 00:25:15.287362 3407 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"ee6a45a89bad2c4fac347af6ea5d685e9b8f9aeecedcf34702b0c96c1c3069ae\": not found" containerID="ee6a45a89bad2c4fac347af6ea5d685e9b8f9aeecedcf34702b0c96c1c3069ae" Mar 14 00:25:15.287489 kubelet[3407]: I0314 00:25:15.287393 3407 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ee6a45a89bad2c4fac347af6ea5d685e9b8f9aeecedcf34702b0c96c1c3069ae"} err="failed to get container status \"ee6a45a89bad2c4fac347af6ea5d685e9b8f9aeecedcf34702b0c96c1c3069ae\": rpc error: code = NotFound desc = an error occurred when try to find container \"ee6a45a89bad2c4fac347af6ea5d685e9b8f9aeecedcf34702b0c96c1c3069ae\": not found" Mar 14 00:25:15.447606 kubelet[3407]: I0314 00:25:15.447479 3407 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="405527d2-65a6-4b41-b0eb-587369dcdd67" path="/var/lib/kubelet/pods/405527d2-65a6-4b41-b0eb-587369dcdd67/volumes" Mar 14 00:25:15.448275 kubelet[3407]: I0314 00:25:15.448242 3407 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b46fd587-6391-4eef-88a2-4c8495707809" path="/var/lib/kubelet/pods/b46fd587-6391-4eef-88a2-4c8495707809/volumes" Mar 14 00:25:15.603881 systemd[1]: var-lib-kubelet-pods-405527d2\x2d65a6\x2d4b41\x2db0eb\x2d587369dcdd67-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dccvzj.mount: Deactivated successfully. Mar 14 00:25:15.604019 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e923b090b07d4f9342ce6f5fa2197faa1425c2570ca215d25caf75735d769e6-rootfs.mount: Deactivated successfully. Mar 14 00:25:15.604299 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7e923b090b07d4f9342ce6f5fa2197faa1425c2570ca215d25caf75735d769e6-shm.mount: Deactivated successfully. Mar 14 00:25:15.604392 systemd[1]: var-lib-kubelet-pods-b46fd587\x2d6391\x2d4eef\x2d88a2\x2d4c8495707809-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dszfzv.mount: Deactivated successfully. 
Mar 14 00:25:15.604476 systemd[1]: var-lib-kubelet-pods-b46fd587\x2d6391\x2d4eef\x2d88a2\x2d4c8495707809-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 14 00:25:15.604539 systemd[1]: var-lib-kubelet-pods-b46fd587\x2d6391\x2d4eef\x2d88a2\x2d4c8495707809-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 14 00:25:16.548355 sshd[4995]: pam_unix(sshd:session): session closed for user core Mar 14 00:25:16.554247 systemd-logind[1958]: Session 24 logged out. Waiting for processes to exit. Mar 14 00:25:16.555867 systemd[1]: sshd@23-172.31.30.82:22-68.220.241.50:44114.service: Deactivated successfully. Mar 14 00:25:16.558632 systemd[1]: session-24.scope: Deactivated successfully. Mar 14 00:25:16.560233 systemd-logind[1958]: Removed session 24. Mar 14 00:25:16.652623 systemd[1]: Started sshd@24-172.31.30.82:22-68.220.241.50:44124.service - OpenSSH per-connection server daemon (68.220.241.50:44124). Mar 14 00:25:17.180820 sshd[5156]: Accepted publickey for core from 68.220.241.50 port 44124 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:25:17.182452 sshd[5156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:25:17.188275 systemd-logind[1958]: New session 25 of user core. Mar 14 00:25:17.199366 systemd[1]: Started session-25.scope - Session 25 of User core. 
Mar 14 00:25:17.646457 ntpd[1951]: Deleting interface #12 lxc_health, fe80::b044:85ff:fe3f:acff%8#123, interface stats: received=0, sent=0, dropped=0, active_time=59 secs Mar 14 00:25:17.646907 ntpd[1951]: 14 Mar 00:25:17 ntpd[1951]: Deleting interface #12 lxc_health, fe80::b044:85ff:fe3f:acff%8#123, interface stats: received=0, sent=0, dropped=0, active_time=59 secs Mar 14 00:25:18.567610 kubelet[3407]: E0314 00:25:18.567554 3407 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 14 00:25:18.821210 systemd[1]: Created slice kubepods-burstable-pod87dd6170_08b9_481a_b514_4b8e0f5d2fb1.slice - libcontainer container kubepods-burstable-pod87dd6170_08b9_481a_b514_4b8e0f5d2fb1.slice. Mar 14 00:25:18.857067 sshd[5156]: pam_unix(sshd:session): session closed for user core Mar 14 00:25:18.861015 systemd-logind[1958]: Session 25 logged out. Waiting for processes to exit. Mar 14 00:25:18.864170 systemd[1]: sshd@24-172.31.30.82:22-68.220.241.50:44124.service: Deactivated successfully. Mar 14 00:25:18.870362 systemd[1]: session-25.scope: Deactivated successfully. Mar 14 00:25:18.870586 systemd[1]: session-25.scope: Consumed 1.202s CPU time. Mar 14 00:25:18.873254 systemd-logind[1958]: Removed session 25. Mar 14 00:25:18.942571 systemd[1]: Started sshd@25-172.31.30.82:22-68.220.241.50:44130.service - OpenSSH per-connection server daemon (68.220.241.50:44130). 
Mar 14 00:25:18.948316 kubelet[3407]: I0314 00:25:18.947697 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/87dd6170-08b9-481a-b514-4b8e0f5d2fb1-lib-modules\") pod \"cilium-lrvpf\" (UID: \"87dd6170-08b9-481a-b514-4b8e0f5d2fb1\") " pod="kube-system/cilium-lrvpf" Mar 14 00:25:18.948316 kubelet[3407]: I0314 00:25:18.947744 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/87dd6170-08b9-481a-b514-4b8e0f5d2fb1-cilium-ipsec-secrets\") pod \"cilium-lrvpf\" (UID: \"87dd6170-08b9-481a-b514-4b8e0f5d2fb1\") " pod="kube-system/cilium-lrvpf" Mar 14 00:25:18.948316 kubelet[3407]: I0314 00:25:18.947775 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/87dd6170-08b9-481a-b514-4b8e0f5d2fb1-host-proc-sys-kernel\") pod \"cilium-lrvpf\" (UID: \"87dd6170-08b9-481a-b514-4b8e0f5d2fb1\") " pod="kube-system/cilium-lrvpf" Mar 14 00:25:18.948316 kubelet[3407]: I0314 00:25:18.947804 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/87dd6170-08b9-481a-b514-4b8e0f5d2fb1-cni-path\") pod \"cilium-lrvpf\" (UID: \"87dd6170-08b9-481a-b514-4b8e0f5d2fb1\") " pod="kube-system/cilium-lrvpf" Mar 14 00:25:18.948316 kubelet[3407]: I0314 00:25:18.947827 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/87dd6170-08b9-481a-b514-4b8e0f5d2fb1-etc-cni-netd\") pod \"cilium-lrvpf\" (UID: \"87dd6170-08b9-481a-b514-4b8e0f5d2fb1\") " pod="kube-system/cilium-lrvpf" Mar 14 00:25:18.948316 kubelet[3407]: I0314 00:25:18.947853 3407 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/87dd6170-08b9-481a-b514-4b8e0f5d2fb1-clustermesh-secrets\") pod \"cilium-lrvpf\" (UID: \"87dd6170-08b9-481a-b514-4b8e0f5d2fb1\") " pod="kube-system/cilium-lrvpf" Mar 14 00:25:18.948675 kubelet[3407]: I0314 00:25:18.947877 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsn7x\" (UniqueName: \"kubernetes.io/projected/87dd6170-08b9-481a-b514-4b8e0f5d2fb1-kube-api-access-vsn7x\") pod \"cilium-lrvpf\" (UID: \"87dd6170-08b9-481a-b514-4b8e0f5d2fb1\") " pod="kube-system/cilium-lrvpf" Mar 14 00:25:18.948675 kubelet[3407]: I0314 00:25:18.947902 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/87dd6170-08b9-481a-b514-4b8e0f5d2fb1-bpf-maps\") pod \"cilium-lrvpf\" (UID: \"87dd6170-08b9-481a-b514-4b8e0f5d2fb1\") " pod="kube-system/cilium-lrvpf" Mar 14 00:25:18.948675 kubelet[3407]: I0314 00:25:18.947924 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/87dd6170-08b9-481a-b514-4b8e0f5d2fb1-cilium-config-path\") pod \"cilium-lrvpf\" (UID: \"87dd6170-08b9-481a-b514-4b8e0f5d2fb1\") " pod="kube-system/cilium-lrvpf" Mar 14 00:25:18.948675 kubelet[3407]: I0314 00:25:18.947948 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/87dd6170-08b9-481a-b514-4b8e0f5d2fb1-hubble-tls\") pod \"cilium-lrvpf\" (UID: \"87dd6170-08b9-481a-b514-4b8e0f5d2fb1\") " pod="kube-system/cilium-lrvpf" Mar 14 00:25:18.948675 kubelet[3407]: I0314 00:25:18.948062 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/87dd6170-08b9-481a-b514-4b8e0f5d2fb1-cilium-cgroup\") pod \"cilium-lrvpf\" (UID: \"87dd6170-08b9-481a-b514-4b8e0f5d2fb1\") " pod="kube-system/cilium-lrvpf" Mar 14 00:25:18.948675 kubelet[3407]: I0314 00:25:18.948115 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/87dd6170-08b9-481a-b514-4b8e0f5d2fb1-xtables-lock\") pod \"cilium-lrvpf\" (UID: \"87dd6170-08b9-481a-b514-4b8e0f5d2fb1\") " pod="kube-system/cilium-lrvpf" Mar 14 00:25:18.948941 kubelet[3407]: I0314 00:25:18.948139 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/87dd6170-08b9-481a-b514-4b8e0f5d2fb1-host-proc-sys-net\") pod \"cilium-lrvpf\" (UID: \"87dd6170-08b9-481a-b514-4b8e0f5d2fb1\") " pod="kube-system/cilium-lrvpf" Mar 14 00:25:18.948941 kubelet[3407]: I0314 00:25:18.948162 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/87dd6170-08b9-481a-b514-4b8e0f5d2fb1-hostproc\") pod \"cilium-lrvpf\" (UID: \"87dd6170-08b9-481a-b514-4b8e0f5d2fb1\") " pod="kube-system/cilium-lrvpf" Mar 14 00:25:18.948941 kubelet[3407]: I0314 00:25:18.948189 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/87dd6170-08b9-481a-b514-4b8e0f5d2fb1-cilium-run\") pod \"cilium-lrvpf\" (UID: \"87dd6170-08b9-481a-b514-4b8e0f5d2fb1\") " pod="kube-system/cilium-lrvpf" Mar 14 00:25:19.135631 containerd[1970]: time="2026-03-14T00:25:19.135488300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lrvpf,Uid:87dd6170-08b9-481a-b514-4b8e0f5d2fb1,Namespace:kube-system,Attempt:0,}" Mar 14 00:25:19.186759 containerd[1970]: time="2026-03-14T00:25:19.186401924Z" level=info msg="loading 
plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:25:19.186759 containerd[1970]: time="2026-03-14T00:25:19.186504375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:25:19.187491 containerd[1970]: time="2026-03-14T00:25:19.187338899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:25:19.187639 containerd[1970]: time="2026-03-14T00:25:19.187476350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:25:19.217478 systemd[1]: Started cri-containerd-b3fe65fd38b0e75d66bf5935a6f4df15765b78f7aa4baa398184ec3decbb7f68.scope - libcontainer container b3fe65fd38b0e75d66bf5935a6f4df15765b78f7aa4baa398184ec3decbb7f68. Mar 14 00:25:19.248311 containerd[1970]: time="2026-03-14T00:25:19.248231310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lrvpf,Uid:87dd6170-08b9-481a-b514-4b8e0f5d2fb1,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3fe65fd38b0e75d66bf5935a6f4df15765b78f7aa4baa398184ec3decbb7f68\"" Mar 14 00:25:19.259544 containerd[1970]: time="2026-03-14T00:25:19.258749182Z" level=info msg="CreateContainer within sandbox \"b3fe65fd38b0e75d66bf5935a6f4df15765b78f7aa4baa398184ec3decbb7f68\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 14 00:25:19.283474 containerd[1970]: time="2026-03-14T00:25:19.283420159Z" level=info msg="CreateContainer within sandbox \"b3fe65fd38b0e75d66bf5935a6f4df15765b78f7aa4baa398184ec3decbb7f68\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"124ad867fd6f4e6573b3ca4bd614858337d5440397fbe5774aeed47fee69510a\"" Mar 14 00:25:19.285165 containerd[1970]: time="2026-03-14T00:25:19.284207992Z" level=info msg="StartContainer for 
\"124ad867fd6f4e6573b3ca4bd614858337d5440397fbe5774aeed47fee69510a\"" Mar 14 00:25:19.316562 systemd[1]: Started cri-containerd-124ad867fd6f4e6573b3ca4bd614858337d5440397fbe5774aeed47fee69510a.scope - libcontainer container 124ad867fd6f4e6573b3ca4bd614858337d5440397fbe5774aeed47fee69510a. Mar 14 00:25:19.353538 containerd[1970]: time="2026-03-14T00:25:19.353484633Z" level=info msg="StartContainer for \"124ad867fd6f4e6573b3ca4bd614858337d5440397fbe5774aeed47fee69510a\" returns successfully" Mar 14 00:25:19.372552 systemd[1]: cri-containerd-124ad867fd6f4e6573b3ca4bd614858337d5440397fbe5774aeed47fee69510a.scope: Deactivated successfully. Mar 14 00:25:19.430101 containerd[1970]: time="2026-03-14T00:25:19.429221941Z" level=info msg="shim disconnected" id=124ad867fd6f4e6573b3ca4bd614858337d5440397fbe5774aeed47fee69510a namespace=k8s.io Mar 14 00:25:19.430101 containerd[1970]: time="2026-03-14T00:25:19.429298682Z" level=warning msg="cleaning up after shim disconnected" id=124ad867fd6f4e6573b3ca4bd614858337d5440397fbe5774aeed47fee69510a namespace=k8s.io Mar 14 00:25:19.430101 containerd[1970]: time="2026-03-14T00:25:19.429310915Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:25:19.430444 sshd[5168]: Accepted publickey for core from 68.220.241.50 port 44130 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:25:19.434658 sshd[5168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:25:19.445360 systemd-logind[1958]: New session 26 of user core. Mar 14 00:25:19.452129 containerd[1970]: time="2026-03-14T00:25:19.450981129Z" level=warning msg="cleanup warnings time=\"2026-03-14T00:25:19Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 14 00:25:19.451301 systemd[1]: Started session-26.scope - Session 26 of User core. 
Mar 14 00:25:19.776378 sshd[5168]: pam_unix(sshd:session): session closed for user core Mar 14 00:25:19.781325 systemd[1]: sshd@25-172.31.30.82:22-68.220.241.50:44130.service: Deactivated successfully. Mar 14 00:25:19.783922 systemd[1]: session-26.scope: Deactivated successfully. Mar 14 00:25:19.785205 systemd-logind[1958]: Session 26 logged out. Waiting for processes to exit. Mar 14 00:25:19.786604 systemd-logind[1958]: Removed session 26. Mar 14 00:25:19.870112 systemd[1]: Started sshd@26-172.31.30.82:22-68.220.241.50:44132.service - OpenSSH per-connection server daemon (68.220.241.50:44132). Mar 14 00:25:20.164981 containerd[1970]: time="2026-03-14T00:25:20.164868974Z" level=info msg="CreateContainer within sandbox \"b3fe65fd38b0e75d66bf5935a6f4df15765b78f7aa4baa398184ec3decbb7f68\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 14 00:25:20.190781 containerd[1970]: time="2026-03-14T00:25:20.188474030Z" level=info msg="CreateContainer within sandbox \"b3fe65fd38b0e75d66bf5935a6f4df15765b78f7aa4baa398184ec3decbb7f68\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"90c2869b352d90e4f25574c87e9194e0f90d3d62c57b3aa5bf84972b434859da\"" Mar 14 00:25:20.193281 containerd[1970]: time="2026-03-14T00:25:20.191142458Z" level=info msg="StartContainer for \"90c2869b352d90e4f25574c87e9194e0f90d3d62c57b3aa5bf84972b434859da\"" Mar 14 00:25:20.243403 systemd[1]: Started cri-containerd-90c2869b352d90e4f25574c87e9194e0f90d3d62c57b3aa5bf84972b434859da.scope - libcontainer container 90c2869b352d90e4f25574c87e9194e0f90d3d62c57b3aa5bf84972b434859da. Mar 14 00:25:20.282466 containerd[1970]: time="2026-03-14T00:25:20.282417996Z" level=info msg="StartContainer for \"90c2869b352d90e4f25574c87e9194e0f90d3d62c57b3aa5bf84972b434859da\" returns successfully" Mar 14 00:25:20.294872 systemd[1]: cri-containerd-90c2869b352d90e4f25574c87e9194e0f90d3d62c57b3aa5bf84972b434859da.scope: Deactivated successfully. 
Mar 14 00:25:20.345739 containerd[1970]: time="2026-03-14T00:25:20.345653737Z" level=info msg="shim disconnected" id=90c2869b352d90e4f25574c87e9194e0f90d3d62c57b3aa5bf84972b434859da namespace=k8s.io Mar 14 00:25:20.345739 containerd[1970]: time="2026-03-14T00:25:20.345735854Z" level=warning msg="cleaning up after shim disconnected" id=90c2869b352d90e4f25574c87e9194e0f90d3d62c57b3aa5bf84972b434859da namespace=k8s.io Mar 14 00:25:20.345739 containerd[1970]: time="2026-03-14T00:25:20.345747661Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:25:20.354120 sshd[5286]: Accepted publickey for core from 68.220.241.50 port 44132 ssh2: RSA SHA256:TceU6OEhln+Uy1Zsn8ZIbJdvrJBh/V63f4/ylJLNRDE Mar 14 00:25:20.356521 sshd[5286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:25:20.364231 containerd[1970]: time="2026-03-14T00:25:20.363137848Z" level=warning msg="cleanup warnings time=\"2026-03-14T00:25:20Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 14 00:25:20.365364 systemd-logind[1958]: New session 27 of user core. Mar 14 00:25:20.371320 systemd[1]: Started session-27.scope - Session 27 of User core. 
Mar 14 00:25:20.444652 kubelet[3407]: E0314 00:25:20.444400 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-crdw6" podUID="96c0570e-b223-4354-8460-e0b7e608a5bf" Mar 14 00:25:20.444652 kubelet[3407]: E0314 00:25:20.444515 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-l25zk" podUID="fcd6143b-7b0d-4d64-8b15-d21944fa701e" Mar 14 00:25:21.059154 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90c2869b352d90e4f25574c87e9194e0f90d3d62c57b3aa5bf84972b434859da-rootfs.mount: Deactivated successfully. Mar 14 00:25:21.170901 containerd[1970]: time="2026-03-14T00:25:21.170704694Z" level=info msg="CreateContainer within sandbox \"b3fe65fd38b0e75d66bf5935a6f4df15765b78f7aa4baa398184ec3decbb7f68\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 14 00:25:21.211763 containerd[1970]: time="2026-03-14T00:25:21.211711237Z" level=info msg="CreateContainer within sandbox \"b3fe65fd38b0e75d66bf5935a6f4df15765b78f7aa4baa398184ec3decbb7f68\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c9b2a008b69a5d24867591306d44b1e24f0cc5db36c6d2e8303c19f865f83b32\"" Mar 14 00:25:21.214139 containerd[1970]: time="2026-03-14T00:25:21.212548144Z" level=info msg="StartContainer for \"c9b2a008b69a5d24867591306d44b1e24f0cc5db36c6d2e8303c19f865f83b32\"" Mar 14 00:25:21.255733 systemd[1]: run-containerd-runc-k8s.io-c9b2a008b69a5d24867591306d44b1e24f0cc5db36c6d2e8303c19f865f83b32-runc.rjs5Hi.mount: Deactivated successfully. 
Mar 14 00:25:21.265548 systemd[1]: Started cri-containerd-c9b2a008b69a5d24867591306d44b1e24f0cc5db36c6d2e8303c19f865f83b32.scope - libcontainer container c9b2a008b69a5d24867591306d44b1e24f0cc5db36c6d2e8303c19f865f83b32.
Mar 14 00:25:21.305497 containerd[1970]: time="2026-03-14T00:25:21.305303033Z" level=info msg="StartContainer for \"c9b2a008b69a5d24867591306d44b1e24f0cc5db36c6d2e8303c19f865f83b32\" returns successfully"
Mar 14 00:25:21.313168 systemd[1]: cri-containerd-c9b2a008b69a5d24867591306d44b1e24f0cc5db36c6d2e8303c19f865f83b32.scope: Deactivated successfully.
Mar 14 00:25:21.365841 containerd[1970]: time="2026-03-14T00:25:21.365751767Z" level=info msg="shim disconnected" id=c9b2a008b69a5d24867591306d44b1e24f0cc5db36c6d2e8303c19f865f83b32 namespace=k8s.io
Mar 14 00:25:21.365841 containerd[1970]: time="2026-03-14T00:25:21.365838537Z" level=warning msg="cleaning up after shim disconnected" id=c9b2a008b69a5d24867591306d44b1e24f0cc5db36c6d2e8303c19f865f83b32 namespace=k8s.io
Mar 14 00:25:21.365841 containerd[1970]: time="2026-03-14T00:25:21.365849492Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:25:22.059382 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c9b2a008b69a5d24867591306d44b1e24f0cc5db36c6d2e8303c19f865f83b32-rootfs.mount: Deactivated successfully.
Mar 14 00:25:22.178012 containerd[1970]: time="2026-03-14T00:25:22.177964684Z" level=info msg="CreateContainer within sandbox \"b3fe65fd38b0e75d66bf5935a6f4df15765b78f7aa4baa398184ec3decbb7f68\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 14 00:25:22.203178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3255945519.mount: Deactivated successfully.
Mar 14 00:25:22.206194 containerd[1970]: time="2026-03-14T00:25:22.206143511Z" level=info msg="CreateContainer within sandbox \"b3fe65fd38b0e75d66bf5935a6f4df15765b78f7aa4baa398184ec3decbb7f68\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"50e15a220ab273a8d52dba3475714837c2d9b43fdc7ca8d9c4b18e97458d21e8\""
Mar 14 00:25:22.207077 containerd[1970]: time="2026-03-14T00:25:22.207038342Z" level=info msg="StartContainer for \"50e15a220ab273a8d52dba3475714837c2d9b43fdc7ca8d9c4b18e97458d21e8\""
Mar 14 00:25:22.251365 systemd[1]: Started cri-containerd-50e15a220ab273a8d52dba3475714837c2d9b43fdc7ca8d9c4b18e97458d21e8.scope - libcontainer container 50e15a220ab273a8d52dba3475714837c2d9b43fdc7ca8d9c4b18e97458d21e8.
Mar 14 00:25:22.282991 systemd[1]: cri-containerd-50e15a220ab273a8d52dba3475714837c2d9b43fdc7ca8d9c4b18e97458d21e8.scope: Deactivated successfully.
Mar 14 00:25:22.284729 containerd[1970]: time="2026-03-14T00:25:22.284460070Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87dd6170_08b9_481a_b514_4b8e0f5d2fb1.slice/cri-containerd-50e15a220ab273a8d52dba3475714837c2d9b43fdc7ca8d9c4b18e97458d21e8.scope/memory.events\": no such file or directory"
Mar 14 00:25:22.289155 containerd[1970]: time="2026-03-14T00:25:22.288849553Z" level=info msg="StartContainer for \"50e15a220ab273a8d52dba3475714837c2d9b43fdc7ca8d9c4b18e97458d21e8\" returns successfully"
Mar 14 00:25:22.325411 containerd[1970]: time="2026-03-14T00:25:22.324977920Z" level=info msg="shim disconnected" id=50e15a220ab273a8d52dba3475714837c2d9b43fdc7ca8d9c4b18e97458d21e8 namespace=k8s.io
Mar 14 00:25:22.325411 containerd[1970]: time="2026-03-14T00:25:22.325040577Z" level=warning msg="cleaning up after shim disconnected" id=50e15a220ab273a8d52dba3475714837c2d9b43fdc7ca8d9c4b18e97458d21e8 namespace=k8s.io
Mar 14 00:25:22.325411 containerd[1970]: time="2026-03-14T00:25:22.325051568Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:25:22.347042 containerd[1970]: time="2026-03-14T00:25:22.346981955Z" level=warning msg="cleanup warnings time=\"2026-03-14T00:25:22Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 14 00:25:22.445162 kubelet[3407]: E0314 00:25:22.445075 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-crdw6" podUID="96c0570e-b223-4354-8460-e0b7e608a5bf"
Mar 14 00:25:22.445960 kubelet[3407]: E0314 00:25:22.445880 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-l25zk" podUID="fcd6143b-7b0d-4d64-8b15-d21944fa701e"
Mar 14 00:25:23.059403 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50e15a220ab273a8d52dba3475714837c2d9b43fdc7ca8d9c4b18e97458d21e8-rootfs.mount: Deactivated successfully.
Mar 14 00:25:23.183844 containerd[1970]: time="2026-03-14T00:25:23.183705590Z" level=info msg="CreateContainer within sandbox \"b3fe65fd38b0e75d66bf5935a6f4df15765b78f7aa4baa398184ec3decbb7f68\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 14 00:25:23.222757 containerd[1970]: time="2026-03-14T00:25:23.222703184Z" level=info msg="CreateContainer within sandbox \"b3fe65fd38b0e75d66bf5935a6f4df15765b78f7aa4baa398184ec3decbb7f68\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"46baa3c8bc54572edf739d22e260285b96290ebc85c3ef7198863810a794a526\""
Mar 14 00:25:23.224267 containerd[1970]: time="2026-03-14T00:25:23.223396232Z" level=info msg="StartContainer for \"46baa3c8bc54572edf739d22e260285b96290ebc85c3ef7198863810a794a526\""
Mar 14 00:25:23.269379 systemd[1]: Started cri-containerd-46baa3c8bc54572edf739d22e260285b96290ebc85c3ef7198863810a794a526.scope - libcontainer container 46baa3c8bc54572edf739d22e260285b96290ebc85c3ef7198863810a794a526.
Mar 14 00:25:23.313178 containerd[1970]: time="2026-03-14T00:25:23.312261427Z" level=info msg="StartContainer for \"46baa3c8bc54572edf739d22e260285b96290ebc85c3ef7198863810a794a526\" returns successfully"
Mar 14 00:25:24.014145 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Mar 14 00:25:27.046202 systemd-networkd[1898]: lxc_health: Link UP
Mar 14 00:25:27.051771 systemd-networkd[1898]: lxc_health: Gained carrier
Mar 14 00:25:27.054666 (udev-worker)[6028]: Network interface NamePolicy= disabled on kernel command line.
Mar 14 00:25:27.171761 kubelet[3407]: I0314 00:25:27.171693 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lrvpf" podStartSLOduration=9.171671505 podStartE2EDuration="9.171671505s" podCreationTimestamp="2026-03-14 00:25:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:25:24.215494021 +0000 UTC m=+101.030168664" watchObservedRunningTime="2026-03-14 00:25:27.171671505 +0000 UTC m=+103.986346150"
Mar 14 00:25:27.337719 systemd[1]: run-containerd-runc-k8s.io-46baa3c8bc54572edf739d22e260285b96290ebc85c3ef7198863810a794a526-runc.6BNyOv.mount: Deactivated successfully.
Mar 14 00:25:28.457362 systemd-networkd[1898]: lxc_health: Gained IPv6LL
Mar 14 00:25:30.641743 ntpd[1951]: Listen normally on 15 lxc_health [fe80::cc8d:f2ff:fef7:c381%14]:123
Mar 14 00:25:30.642304 ntpd[1951]: 14 Mar 00:25:30 ntpd[1951]: Listen normally on 15 lxc_health [fe80::cc8d:f2ff:fef7:c381%14]:123
Mar 14 00:25:34.783247 sshd[5286]: pam_unix(sshd:session): session closed for user core
Mar 14 00:25:34.788194 systemd[1]: sshd@26-172.31.30.82:22-68.220.241.50:44132.service: Deactivated successfully.
Mar 14 00:25:34.790775 systemd[1]: session-27.scope: Deactivated successfully.
Mar 14 00:25:34.792218 systemd-logind[1958]: Session 27 logged out. Waiting for processes to exit.
Mar 14 00:25:34.793428 systemd-logind[1958]: Removed session 27.
Mar 14 00:25:43.410888 containerd[1970]: time="2026-03-14T00:25:43.410813056Z" level=info msg="StopPodSandbox for \"7e923b090b07d4f9342ce6f5fa2197faa1425c2570ca215d25caf75735d769e6\""
Mar 14 00:25:43.411941 containerd[1970]: time="2026-03-14T00:25:43.410931993Z" level=info msg="TearDown network for sandbox \"7e923b090b07d4f9342ce6f5fa2197faa1425c2570ca215d25caf75735d769e6\" successfully"
Mar 14 00:25:43.411941 containerd[1970]: time="2026-03-14T00:25:43.410948573Z" level=info msg="StopPodSandbox for \"7e923b090b07d4f9342ce6f5fa2197faa1425c2570ca215d25caf75735d769e6\" returns successfully"
Mar 14 00:25:43.411941 containerd[1970]: time="2026-03-14T00:25:43.411505807Z" level=info msg="RemovePodSandbox for \"7e923b090b07d4f9342ce6f5fa2197faa1425c2570ca215d25caf75735d769e6\""
Mar 14 00:25:43.411941 containerd[1970]: time="2026-03-14T00:25:43.411688509Z" level=info msg="Forcibly stopping sandbox \"7e923b090b07d4f9342ce6f5fa2197faa1425c2570ca215d25caf75735d769e6\""
Mar 14 00:25:43.411941 containerd[1970]: time="2026-03-14T00:25:43.411774496Z" level=info msg="TearDown network for sandbox \"7e923b090b07d4f9342ce6f5fa2197faa1425c2570ca215d25caf75735d769e6\" successfully"
Mar 14 00:25:43.417537 containerd[1970]: time="2026-03-14T00:25:43.417474607Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7e923b090b07d4f9342ce6f5fa2197faa1425c2570ca215d25caf75735d769e6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 14 00:25:43.417808 containerd[1970]: time="2026-03-14T00:25:43.417566041Z" level=info msg="RemovePodSandbox \"7e923b090b07d4f9342ce6f5fa2197faa1425c2570ca215d25caf75735d769e6\" returns successfully"
Mar 14 00:25:43.418430 containerd[1970]: time="2026-03-14T00:25:43.418202995Z" level=info msg="StopPodSandbox for \"5a63dcfbe3c993ab7a97fd8810d8c50506048a4f5ca9e63f251ee003ee639006\""
Mar 14 00:25:43.418430 containerd[1970]: time="2026-03-14T00:25:43.418317640Z" level=info msg="TearDown network for sandbox \"5a63dcfbe3c993ab7a97fd8810d8c50506048a4f5ca9e63f251ee003ee639006\" successfully"
Mar 14 00:25:43.418430 containerd[1970]: time="2026-03-14T00:25:43.418330242Z" level=info msg="StopPodSandbox for \"5a63dcfbe3c993ab7a97fd8810d8c50506048a4f5ca9e63f251ee003ee639006\" returns successfully"
Mar 14 00:25:43.418767 containerd[1970]: time="2026-03-14T00:25:43.418739569Z" level=info msg="RemovePodSandbox for \"5a63dcfbe3c993ab7a97fd8810d8c50506048a4f5ca9e63f251ee003ee639006\""
Mar 14 00:25:43.418847 containerd[1970]: time="2026-03-14T00:25:43.418774476Z" level=info msg="Forcibly stopping sandbox \"5a63dcfbe3c993ab7a97fd8810d8c50506048a4f5ca9e63f251ee003ee639006\""
Mar 14 00:25:43.418894 containerd[1970]: time="2026-03-14T00:25:43.418855242Z" level=info msg="TearDown network for sandbox \"5a63dcfbe3c993ab7a97fd8810d8c50506048a4f5ca9e63f251ee003ee639006\" successfully"
Mar 14 00:25:43.425328 containerd[1970]: time="2026-03-14T00:25:43.425262286Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5a63dcfbe3c993ab7a97fd8810d8c50506048a4f5ca9e63f251ee003ee639006\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 14 00:25:43.425534 containerd[1970]: time="2026-03-14T00:25:43.425345179Z" level=info msg="RemovePodSandbox \"5a63dcfbe3c993ab7a97fd8810d8c50506048a4f5ca9e63f251ee003ee639006\" returns successfully"
Mar 14 00:26:04.430736 systemd[1]: cri-containerd-16ee28a6ac08fff501ca9aef97e1b246d5ae633251bb6fc2f7caf604cc6fac1d.scope: Deactivated successfully.
Mar 14 00:26:04.431451 systemd[1]: cri-containerd-16ee28a6ac08fff501ca9aef97e1b246d5ae633251bb6fc2f7caf604cc6fac1d.scope: Consumed 4.164s CPU time, 20.8M memory peak, 0B memory swap peak.
Mar 14 00:26:04.463124 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16ee28a6ac08fff501ca9aef97e1b246d5ae633251bb6fc2f7caf604cc6fac1d-rootfs.mount: Deactivated successfully.
Mar 14 00:26:04.479506 containerd[1970]: time="2026-03-14T00:26:04.479390773Z" level=info msg="shim disconnected" id=16ee28a6ac08fff501ca9aef97e1b246d5ae633251bb6fc2f7caf604cc6fac1d namespace=k8s.io
Mar 14 00:26:04.479506 containerd[1970]: time="2026-03-14T00:26:04.479479102Z" level=warning msg="cleaning up after shim disconnected" id=16ee28a6ac08fff501ca9aef97e1b246d5ae633251bb6fc2f7caf604cc6fac1d namespace=k8s.io
Mar 14 00:26:04.479506 containerd[1970]: time="2026-03-14T00:26:04.479491722Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:26:04.496975 containerd[1970]: time="2026-03-14T00:26:04.496920461Z" level=warning msg="cleanup warnings time=\"2026-03-14T00:26:04Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 14 00:26:05.365818 kubelet[3407]: I0314 00:26:05.365705 3407 scope.go:117] "RemoveContainer" containerID="16ee28a6ac08fff501ca9aef97e1b246d5ae633251bb6fc2f7caf604cc6fac1d"
Mar 14 00:26:05.369966 containerd[1970]: time="2026-03-14T00:26:05.369923461Z" level=info msg="CreateContainer within sandbox \"e0c9a5d5821931483e959f2e8451a251b325c4a7e3c7956822d4f08765093a1f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Mar 14 00:26:05.394005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1518311606.mount: Deactivated successfully.
Mar 14 00:26:05.398246 containerd[1970]: time="2026-03-14T00:26:05.398190779Z" level=info msg="CreateContainer within sandbox \"e0c9a5d5821931483e959f2e8451a251b325c4a7e3c7956822d4f08765093a1f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"b8d6b6e1d73f4f2945ad74179259035532f5363d56fd2bf9fe2655e1c8d5311d\""
Mar 14 00:26:05.398852 containerd[1970]: time="2026-03-14T00:26:05.398818651Z" level=info msg="StartContainer for \"b8d6b6e1d73f4f2945ad74179259035532f5363d56fd2bf9fe2655e1c8d5311d\""
Mar 14 00:26:05.441362 systemd[1]: Started cri-containerd-b8d6b6e1d73f4f2945ad74179259035532f5363d56fd2bf9fe2655e1c8d5311d.scope - libcontainer container b8d6b6e1d73f4f2945ad74179259035532f5363d56fd2bf9fe2655e1c8d5311d.
Mar 14 00:26:05.500367 containerd[1970]: time="2026-03-14T00:26:05.500294129Z" level=info msg="StartContainer for \"b8d6b6e1d73f4f2945ad74179259035532f5363d56fd2bf9fe2655e1c8d5311d\" returns successfully"
Mar 14 00:26:06.827682 kubelet[3407]: E0314 00:26:06.827619 3407 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-82?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 14 00:26:09.627373 systemd[1]: cri-containerd-dfc6cceace99c86b37915f5c295b244184a04f8933ec512f980ad8ca12ccf804.scope: Deactivated successfully.
Mar 14 00:26:09.627699 systemd[1]: cri-containerd-dfc6cceace99c86b37915f5c295b244184a04f8933ec512f980ad8ca12ccf804.scope: Consumed 2.914s CPU time, 16.1M memory peak, 0B memory swap peak.
Mar 14 00:26:09.657986 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dfc6cceace99c86b37915f5c295b244184a04f8933ec512f980ad8ca12ccf804-rootfs.mount: Deactivated successfully.
Mar 14 00:26:09.681063 containerd[1970]: time="2026-03-14T00:26:09.680960100Z" level=info msg="shim disconnected" id=dfc6cceace99c86b37915f5c295b244184a04f8933ec512f980ad8ca12ccf804 namespace=k8s.io
Mar 14 00:26:09.681063 containerd[1970]: time="2026-03-14T00:26:09.681058336Z" level=warning msg="cleaning up after shim disconnected" id=dfc6cceace99c86b37915f5c295b244184a04f8933ec512f980ad8ca12ccf804 namespace=k8s.io
Mar 14 00:26:09.681871 containerd[1970]: time="2026-03-14T00:26:09.681072782Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:26:10.380109 kubelet[3407]: I0314 00:26:10.380051 3407 scope.go:117] "RemoveContainer" containerID="dfc6cceace99c86b37915f5c295b244184a04f8933ec512f980ad8ca12ccf804"
Mar 14 00:26:10.382532 containerd[1970]: time="2026-03-14T00:26:10.382493372Z" level=info msg="CreateContainer within sandbox \"47640d277dd39fc2dc72b1cbf89c1cf0d87ae5da1e1664a6e56516a1a5135133\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Mar 14 00:26:10.410200 containerd[1970]: time="2026-03-14T00:26:10.410138800Z" level=info msg="CreateContainer within sandbox \"47640d277dd39fc2dc72b1cbf89c1cf0d87ae5da1e1664a6e56516a1a5135133\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"0e4386185f4ce637fd74b214cd76a1b329f1019be4e58db83ea7dd40b73fec08\""
Mar 14 00:26:10.410989 containerd[1970]: time="2026-03-14T00:26:10.410950699Z" level=info msg="StartContainer for \"0e4386185f4ce637fd74b214cd76a1b329f1019be4e58db83ea7dd40b73fec08\""
Mar 14 00:26:10.454356 systemd[1]: Started cri-containerd-0e4386185f4ce637fd74b214cd76a1b329f1019be4e58db83ea7dd40b73fec08.scope - libcontainer container 0e4386185f4ce637fd74b214cd76a1b329f1019be4e58db83ea7dd40b73fec08.
Mar 14 00:26:10.507341 containerd[1970]: time="2026-03-14T00:26:10.507176727Z" level=info msg="StartContainer for \"0e4386185f4ce637fd74b214cd76a1b329f1019be4e58db83ea7dd40b73fec08\" returns successfully"
Mar 14 00:26:16.829149 kubelet[3407]: E0314 00:26:16.828728 3407 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-82?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"