Mar 7 02:00:47.077648 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 6 22:58:19 -00 2026 Mar 7 02:00:47.077693 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5 Mar 7 02:00:47.077715 kernel: BIOS-provided physical RAM map: Mar 7 02:00:47.077725 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Mar 7 02:00:47.077736 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Mar 7 02:00:47.077745 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Mar 7 02:00:47.077757 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Mar 7 02:00:47.077767 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Mar 7 02:00:47.077777 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Mar 7 02:00:47.077793 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Mar 7 02:00:47.077804 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Mar 7 02:00:47.077813 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Mar 7 02:00:47.077859 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Mar 7 02:00:47.077871 kernel: NX (Execute Disable) protection: active Mar 7 02:00:47.077883 kernel: APIC: Static calls initialized Mar 7 02:00:47.077930 kernel: SMBIOS 2.8 present. 
Mar 7 02:00:47.077943 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Mar 7 02:00:47.077955 kernel: Hypervisor detected: KVM Mar 7 02:00:47.077966 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 7 02:00:47.077977 kernel: kvm-clock: using sched offset of 29112720268 cycles Mar 7 02:00:47.077989 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 7 02:00:47.078001 kernel: tsc: Detected 2445.426 MHz processor Mar 7 02:00:47.078012 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 7 02:00:47.078024 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 7 02:00:47.078041 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Mar 7 02:00:47.078053 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Mar 7 02:00:47.078065 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 7 02:00:47.078076 kernel: Using GB pages for direct mapping Mar 7 02:00:47.078087 kernel: ACPI: Early table checksum verification disabled Mar 7 02:00:47.078097 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Mar 7 02:00:47.078109 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 02:00:47.078120 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 02:00:47.078131 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 02:00:47.078148 kernel: ACPI: FACS 0x000000009CFE0000 000040 Mar 7 02:00:47.078160 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 02:00:47.078171 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 02:00:47.078182 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 02:00:47.078272 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 
00000001) Mar 7 02:00:47.078288 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Mar 7 02:00:47.078300 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Mar 7 02:00:47.078320 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Mar 7 02:00:47.078337 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Mar 7 02:00:47.078349 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Mar 7 02:00:47.078361 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Mar 7 02:00:47.078373 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Mar 7 02:00:47.078385 kernel: No NUMA configuration found Mar 7 02:00:47.078589 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Mar 7 02:00:47.078612 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Mar 7 02:00:47.078624 kernel: Zone ranges: Mar 7 02:00:47.078637 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 7 02:00:47.078649 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Mar 7 02:00:47.078660 kernel: Normal empty Mar 7 02:00:47.078672 kernel: Movable zone start for each node Mar 7 02:00:47.078684 kernel: Early memory node ranges Mar 7 02:00:47.078695 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Mar 7 02:00:47.078707 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Mar 7 02:00:47.078725 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Mar 7 02:00:47.078737 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 7 02:00:47.078779 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Mar 7 02:00:47.078794 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Mar 7 02:00:47.078806 kernel: ACPI: PM-Timer IO Port: 0x608 Mar 7 02:00:47.078818 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 7 02:00:47.078830 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, 
GSI 0-23 Mar 7 02:00:47.078842 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 7 02:00:47.078854 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 7 02:00:47.078871 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 7 02:00:47.078883 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 7 02:00:47.078896 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 7 02:00:47.078908 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 7 02:00:47.078919 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 7 02:00:47.078931 kernel: TSC deadline timer available Mar 7 02:00:47.078943 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Mar 7 02:00:47.078955 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 7 02:00:47.078966 kernel: kvm-guest: KVM setup pv remote TLB flush Mar 7 02:00:47.079016 kernel: kvm-guest: setup PV sched yield Mar 7 02:00:47.079031 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Mar 7 02:00:47.079042 kernel: Booting paravirtualized kernel on KVM Mar 7 02:00:47.079055 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 7 02:00:47.079067 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Mar 7 02:00:47.079079 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Mar 7 02:00:47.079090 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Mar 7 02:00:47.079102 kernel: pcpu-alloc: [0] 0 1 2 3 Mar 7 02:00:47.079113 kernel: kvm-guest: PV spinlocks enabled Mar 7 02:00:47.079131 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 7 02:00:47.079144 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 
root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5 Mar 7 02:00:47.079158 kernel: random: crng init done Mar 7 02:00:47.079169 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 7 02:00:47.079181 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 7 02:00:47.079395 kernel: Fallback order for Node 0: 0 Mar 7 02:00:47.079413 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 Mar 7 02:00:47.079424 kernel: Policy zone: DMA32 Mar 7 02:00:47.079435 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 7 02:00:47.079453 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 136884K reserved, 0K cma-reserved) Mar 7 02:00:47.079464 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Mar 7 02:00:47.079476 kernel: ftrace: allocating 37996 entries in 149 pages Mar 7 02:00:47.079487 kernel: ftrace: allocated 149 pages with 4 groups Mar 7 02:00:47.079498 kernel: Dynamic Preempt: voluntary Mar 7 02:00:47.079510 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 7 02:00:47.079524 kernel: rcu: RCU event tracing is enabled. Mar 7 02:00:47.079536 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Mar 7 02:00:47.079553 kernel: Trampoline variant of Tasks RCU enabled. Mar 7 02:00:47.079563 kernel: Rude variant of Tasks RCU enabled. Mar 7 02:00:47.079573 kernel: Tracing variant of Tasks RCU enabled. Mar 7 02:00:47.079584 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 7 02:00:47.079595 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Mar 7 02:00:47.079644 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Mar 7 02:00:47.079657 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Mar 7 02:00:47.079668 kernel: Console: colour VGA+ 80x25 Mar 7 02:00:47.079678 kernel: printk: console [ttyS0] enabled Mar 7 02:00:47.079688 kernel: ACPI: Core revision 20230628 Mar 7 02:00:47.079705 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Mar 7 02:00:47.079716 kernel: APIC: Switch to symmetric I/O mode setup Mar 7 02:00:47.079726 kernel: x2apic enabled Mar 7 02:00:47.079738 kernel: APIC: Switched APIC routing to: physical x2apic Mar 7 02:00:47.079749 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Mar 7 02:00:47.079761 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Mar 7 02:00:47.079773 kernel: kvm-guest: setup PV IPIs Mar 7 02:00:47.079784 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 7 02:00:47.079814 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Mar 7 02:00:47.079826 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426) Mar 7 02:00:47.079838 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Mar 7 02:00:47.079853 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Mar 7 02:00:47.079864 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Mar 7 02:00:47.079876 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 7 02:00:47.079888 kernel: Spectre V2 : Mitigation: Retpolines Mar 7 02:00:47.079901 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Mar 7 02:00:47.079918 kernel: Speculative Store Bypass: Vulnerable Mar 7 02:00:47.079930 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Mar 7 02:00:47.079983 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. 
Mar 7 02:00:47.079998 kernel: active return thunk: srso_alias_return_thunk Mar 7 02:00:47.080010 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Mar 7 02:00:47.080023 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Mar 7 02:00:47.080035 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Mar 7 02:00:47.080048 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 7 02:00:47.080066 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 7 02:00:47.080079 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 7 02:00:47.080091 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 7 02:00:47.080103 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Mar 7 02:00:47.080116 kernel: Freeing SMP alternatives memory: 32K Mar 7 02:00:47.080128 kernel: pid_max: default: 32768 minimum: 301 Mar 7 02:00:47.080139 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 7 02:00:47.080151 kernel: landlock: Up and running. Mar 7 02:00:47.080162 kernel: SELinux: Initializing. Mar 7 02:00:47.080178 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 7 02:00:47.080190 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 7 02:00:47.080315 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Mar 7 02:00:47.080330 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 7 02:00:47.080343 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 7 02:00:47.080355 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 7 02:00:47.080366 kernel: Performance Events: PMU not available due to virtualization, using software events only. 
Mar 7 02:00:47.080376 kernel: signal: max sigframe size: 1776 Mar 7 02:00:47.080421 kernel: rcu: Hierarchical SRCU implementation. Mar 7 02:00:47.080440 kernel: rcu: Max phase no-delay instances is 400. Mar 7 02:00:47.080452 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 7 02:00:47.080463 kernel: smp: Bringing up secondary CPUs ... Mar 7 02:00:47.080475 kernel: smpboot: x86: Booting SMP configuration: Mar 7 02:00:47.080486 kernel: .... node #0, CPUs: #1 #2 #3 Mar 7 02:00:47.080497 kernel: smp: Brought up 1 node, 4 CPUs Mar 7 02:00:47.080509 kernel: smpboot: Max logical packages: 1 Mar 7 02:00:47.080520 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS) Mar 7 02:00:47.080533 kernel: devtmpfs: initialized Mar 7 02:00:47.080550 kernel: x86/mm: Memory block size: 128MB Mar 7 02:00:47.080561 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 7 02:00:47.080602 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Mar 7 02:00:47.080614 kernel: pinctrl core: initialized pinctrl subsystem Mar 7 02:00:47.080626 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 7 02:00:47.080638 kernel: audit: initializing netlink subsys (disabled) Mar 7 02:00:47.080651 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 7 02:00:47.080664 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 7 02:00:47.080676 kernel: audit: type=2000 audit(1772848834.449:1): state=initialized audit_enabled=0 res=1 Mar 7 02:00:47.080696 kernel: cpuidle: using governor menu Mar 7 02:00:47.080708 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 7 02:00:47.080720 kernel: dca service started, version 1.12.1 Mar 7 02:00:47.080733 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Mar 7 02:00:47.080746 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Mar 7 02:00:47.080758 
kernel: PCI: Using configuration type 1 for base access Mar 7 02:00:47.080771 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Mar 7 02:00:47.080783 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 7 02:00:47.080796 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 7 02:00:47.080814 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 7 02:00:47.080828 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 7 02:00:47.080840 kernel: ACPI: Added _OSI(Module Device) Mar 7 02:00:47.080852 kernel: ACPI: Added _OSI(Processor Device) Mar 7 02:00:47.080864 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 7 02:00:47.080877 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 7 02:00:47.080890 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 7 02:00:47.080902 kernel: ACPI: Interpreter enabled Mar 7 02:00:47.080915 kernel: ACPI: PM: (supports S0 S3 S5) Mar 7 02:00:47.080934 kernel: ACPI: Using IOAPIC for interrupt routing Mar 7 02:00:47.080947 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 7 02:00:47.080959 kernel: PCI: Using E820 reservations for host bridge windows Mar 7 02:00:47.080972 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Mar 7 02:00:47.080985 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 7 02:00:47.081468 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 7 02:00:47.081712 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Mar 7 02:00:47.081931 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Mar 7 02:00:47.081957 kernel: PCI host bridge to bus 0000:00 Mar 7 02:00:47.082339 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 7 02:00:47.082555 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] 
Mar 7 02:00:47.082756 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 7 02:00:47.082947 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Mar 7 02:00:47.083138 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Mar 7 02:00:47.083568 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Mar 7 02:00:47.083762 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 7 02:00:47.084024 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Mar 7 02:00:47.084360 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Mar 7 02:00:47.084597 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Mar 7 02:00:47.084832 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Mar 7 02:00:47.085053 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Mar 7 02:00:47.085372 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 7 02:00:47.085583 kernel: pci 0000:00:01.0: pci_fixup_video+0x0/0x110 took 12695 usecs Mar 7 02:00:47.085833 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Mar 7 02:00:47.086068 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Mar 7 02:00:47.086374 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Mar 7 02:00:47.086590 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Mar 7 02:00:47.086814 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Mar 7 02:00:47.087037 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Mar 7 02:00:47.087622 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Mar 7 02:00:47.087861 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Mar 7 02:00:47.088113 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Mar 7 02:00:47.088460 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Mar 7 02:00:47.088901 kernel: pci 
0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Mar 7 02:00:47.089115 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Mar 7 02:00:47.089848 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Mar 7 02:00:47.090093 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Mar 7 02:00:47.090562 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Mar 7 02:00:47.090843 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Mar 7 02:00:47.091072 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Mar 7 02:00:47.091783 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Mar 7 02:00:47.092069 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Mar 7 02:00:47.092501 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Mar 7 02:00:47.092523 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 7 02:00:47.092538 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 7 02:00:47.092551 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 7 02:00:47.092563 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 7 02:00:47.092723 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Mar 7 02:00:47.092735 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Mar 7 02:00:47.092752 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Mar 7 02:00:47.092763 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Mar 7 02:00:47.092774 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Mar 7 02:00:47.092785 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Mar 7 02:00:47.092797 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Mar 7 02:00:47.092809 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Mar 7 02:00:47.092819 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Mar 7 02:00:47.092830 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Mar 
7 02:00:47.092841 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Mar 7 02:00:47.092855 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Mar 7 02:00:47.092866 kernel: iommu: Default domain type: Translated Mar 7 02:00:47.092877 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 7 02:00:47.092888 kernel: PCI: Using ACPI for IRQ routing Mar 7 02:00:47.092899 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 7 02:00:47.092910 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Mar 7 02:00:47.092920 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Mar 7 02:00:47.093142 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Mar 7 02:00:47.093479 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Mar 7 02:00:47.093936 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 7 02:00:47.093955 kernel: vgaarb: loaded Mar 7 02:00:47.093968 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Mar 7 02:00:47.093982 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Mar 7 02:00:47.093994 kernel: clocksource: Switched to clocksource kvm-clock Mar 7 02:00:47.094008 kernel: VFS: Disk quotas dquot_6.6.0 Mar 7 02:00:47.094021 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 7 02:00:47.094034 kernel: pnp: PnP ACPI init Mar 7 02:00:47.094363 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Mar 7 02:00:47.094388 kernel: pnp: PnP ACPI: found 6 devices Mar 7 02:00:47.094400 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 7 02:00:47.094411 kernel: NET: Registered PF_INET protocol family Mar 7 02:00:47.094422 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 7 02:00:47.094434 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 7 02:00:47.094448 kernel: Table-perturb hash table entries: 65536 (order: 6, 
262144 bytes, linear) Mar 7 02:00:47.094460 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 7 02:00:47.094472 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 7 02:00:47.094489 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 7 02:00:47.094502 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 7 02:00:47.094514 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 7 02:00:47.094525 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 7 02:00:47.094536 kernel: NET: Registered PF_XDP protocol family Mar 7 02:00:47.094725 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 7 02:00:47.094917 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 7 02:00:47.095123 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 7 02:00:47.095676 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Mar 7 02:00:47.096109 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Mar 7 02:00:47.096453 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Mar 7 02:00:47.096476 kernel: PCI: CLS 0 bytes, default 64 Mar 7 02:00:47.096488 kernel: Initialise system trusted keyrings Mar 7 02:00:47.096500 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 7 02:00:47.096514 kernel: Key type asymmetric registered Mar 7 02:00:47.096524 kernel: Asymmetric key parser 'x509' registered Mar 7 02:00:47.096537 kernel: hrtimer: interrupt took 2975706 ns Mar 7 02:00:47.096548 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 7 02:00:47.096570 kernel: io scheduler mq-deadline registered Mar 7 02:00:47.096581 kernel: io scheduler kyber registered Mar 7 02:00:47.096593 kernel: io scheduler bfq registered Mar 7 02:00:47.096607 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 7 02:00:47.096620 kernel: ACPI: 
\_SB_.GSIG: Enabled at IRQ 22 Mar 7 02:00:47.096632 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 7 02:00:47.096644 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 7 02:00:47.096658 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 7 02:00:47.096669 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 7 02:00:47.096847 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 7 02:00:47.096861 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 7 02:00:47.096872 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 7 02:00:47.097681 kernel: rtc_cmos 00:04: RTC can wake from S4 Mar 7 02:00:47.097706 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 7 02:00:47.100901 kernel: rtc_cmos 00:04: registered as rtc0 Mar 7 02:00:47.100934 kernel: hpet: Lost 2 RTC interrupts Mar 7 02:00:47.101301 kernel: rtc_cmos 00:04: setting system clock to 2026-03-07T02:00:43 UTC (1772848843) Mar 7 02:00:47.105004 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Mar 7 02:00:47.105042 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Mar 7 02:00:47.105058 kernel: NET: Registered PF_INET6 protocol family Mar 7 02:00:47.105070 kernel: Segment Routing with IPv6 Mar 7 02:00:47.105082 kernel: In-situ OAM (IOAM) with IPv6 Mar 7 02:00:47.105095 kernel: NET: Registered PF_PACKET protocol family Mar 7 02:00:47.105106 kernel: Key type dns_resolver registered Mar 7 02:00:47.105118 kernel: IPI shorthand broadcast: enabled Mar 7 02:00:47.105131 kernel: sched_clock: Marking stable (11302028075, 2042162609)->(15388373248, -2044182564) Mar 7 02:00:47.105161 kernel: registered taskstats version 1 Mar 7 02:00:47.105174 kernel: Loading compiled-in X.509 certificates Mar 7 02:00:47.105186 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: da286e6f6c247ee6f65a875c513de7da57782e90' Mar 7 02:00:47.105302 
kernel: Key type .fscrypt registered Mar 7 02:00:47.105317 kernel: Key type fscrypt-provisioning registered Mar 7 02:00:47.105329 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 7 02:00:47.105341 kernel: ima: Allocated hash algorithm: sha1 Mar 7 02:00:47.105353 kernel: ima: No architecture policies found Mar 7 02:00:47.105375 kernel: clk: Disabling unused clocks Mar 7 02:00:47.105387 kernel: Freeing unused kernel image (initmem) memory: 42892K Mar 7 02:00:47.105402 kernel: Write protecting the kernel read-only data: 36864k Mar 7 02:00:47.105413 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Mar 7 02:00:47.105425 kernel: Run /init as init process Mar 7 02:00:47.105438 kernel: with arguments: Mar 7 02:00:47.105451 kernel: /init Mar 7 02:00:47.105465 kernel: with environment: Mar 7 02:00:47.105478 kernel: HOME=/ Mar 7 02:00:47.105489 kernel: TERM=linux Mar 7 02:00:47.105512 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 7 02:00:47.105529 systemd[1]: Detected virtualization kvm. Mar 7 02:00:47.105542 systemd[1]: Detected architecture x86-64. Mar 7 02:00:47.105555 systemd[1]: Running in initrd. Mar 7 02:00:47.105568 systemd[1]: No hostname configured, using default hostname. Mar 7 02:00:47.105581 systemd[1]: Hostname set to . Mar 7 02:00:47.105595 systemd[1]: Initializing machine ID from VM UUID. Mar 7 02:00:47.105614 systemd[1]: Queued start job for default target initrd.target. Mar 7 02:00:47.105627 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 02:00:47.105640 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Mar 7 02:00:47.105655 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 7 02:00:47.105669 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 7 02:00:47.105681 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 7 02:00:47.105695 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 7 02:00:47.105717 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 7 02:00:47.105730 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 7 02:00:47.105742 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 02:00:47.105757 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 7 02:00:47.105796 systemd[1]: Reached target paths.target - Path Units. Mar 7 02:00:47.105818 systemd[1]: Reached target slices.target - Slice Units. Mar 7 02:00:47.105832 systemd[1]: Reached target swap.target - Swaps. Mar 7 02:00:47.105848 systemd[1]: Reached target timers.target - Timer Units. Mar 7 02:00:47.105861 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 7 02:00:47.105876 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 7 02:00:47.105889 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 7 02:00:47.105903 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 7 02:00:47.105918 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 7 02:00:47.105930 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 7 02:00:47.106114 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Mar 7 02:00:47.106131 systemd[1]: Reached target sockets.target - Socket Units. Mar 7 02:00:47.106146 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 7 02:00:47.106159 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 7 02:00:47.106174 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 7 02:00:47.106187 systemd[1]: Starting systemd-fsck-usr.service... Mar 7 02:00:47.106286 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 7 02:00:47.106303 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 7 02:00:47.106318 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 02:00:47.106387 systemd-journald[195]: Collecting audit messages is disabled. Mar 7 02:00:47.106430 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 7 02:00:47.106443 systemd-journald[195]: Journal started Mar 7 02:00:47.106477 systemd-journald[195]: Runtime Journal (/run/log/journal/3e6c27a43abd4616a2fc1303dfb93787) is 6.0M, max 48.4M, 42.3M free. Mar 7 02:00:47.119652 systemd[1]: Started systemd-journald.service - Journal Service. Mar 7 02:00:47.136370 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 02:00:47.151316 systemd[1]: Finished systemd-fsck-usr.service. Mar 7 02:00:47.308118 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 7 02:00:47.326686 systemd-modules-load[196]: Inserted module 'overlay' Mar 7 02:00:47.464416 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 7 02:00:47.505725 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 7 02:00:47.578828 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Mar 7 02:00:47.878597 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 7 02:00:47.902613 kernel: Bridge firewalling registered
Mar 7 02:00:47.914272 systemd-modules-load[196]: Inserted module 'br_netfilter'
Mar 7 02:00:48.539992 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 02:00:48.567962 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 02:00:48.584174 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 02:00:48.643482 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 02:00:48.784689 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 02:00:48.825868 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 02:00:48.931815 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 02:00:49.001789 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 7 02:00:49.028476 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 02:00:49.097573 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 02:00:49.195024 dracut-cmdline[229]: dracut-dracut-053
Mar 7 02:00:49.224903 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 02:00:49.348591 systemd-resolved[233]: Positive Trust Anchors:
Mar 7 02:00:49.348658 systemd-resolved[233]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 02:00:49.348703 systemd-resolved[233]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 02:00:49.377362 systemd-resolved[233]: Defaulting to hostname 'linux'.
Mar 7 02:00:49.383776 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 02:00:49.500430 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 02:00:50.112517 kernel: SCSI subsystem initialized
Mar 7 02:00:50.196020 kernel: Loading iSCSI transport class v2.0-870.
Mar 7 02:00:50.299857 kernel: iscsi: registered transport (tcp)
Mar 7 02:00:50.395378 kernel: iscsi: registered transport (qla4xxx)
Mar 7 02:00:50.395469 kernel: QLogic iSCSI HBA Driver
Mar 7 02:00:50.556028 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 7 02:00:50.607662 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 7 02:00:50.726825 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 7 02:00:50.726901 kernel: device-mapper: uevent: version 1.0.3
Mar 7 02:00:50.730318 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 7 02:00:50.877802 kernel: raid6: avx2x4 gen() 9933 MB/s
Mar 7 02:00:50.901378 kernel: raid6: avx2x2 gen() 8122 MB/s
Mar 7 02:00:50.942003 kernel: raid6: avx2x1 gen() 4694 MB/s
Mar 7 02:00:50.947863 kernel: raid6: using algorithm avx2x4 gen() 9933 MB/s
Mar 7 02:00:50.973711 kernel: raid6: .... xor() 829 MB/s, rmw enabled
Mar 7 02:00:50.973754 kernel: raid6: using avx2x2 recovery algorithm
Mar 7 02:00:51.070543 kernel: xor: automatically using best checksumming function avx
Mar 7 02:00:52.024845 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 7 02:00:52.108627 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 02:00:52.168932 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 02:00:52.331156 systemd-udevd[416]: Using default interface naming scheme 'v255'.
Mar 7 02:00:52.409761 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 02:00:52.472524 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 7 02:00:52.540836 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation
Mar 7 02:00:52.654091 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 02:00:52.711464 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 02:00:52.989935 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 02:00:53.061046 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 7 02:00:53.197565 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 7 02:00:53.279938 kernel: cryptd: max_cpu_qlen set to 1000
Mar 7 02:00:53.212652 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 02:00:53.226393 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 02:00:53.249580 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 02:00:53.298422 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 7 02:00:53.316804 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 02:00:53.317006 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 02:00:53.366665 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 02:00:53.392602 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 02:00:53.392918 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 02:00:53.424575 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 02:00:53.452832 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 7 02:00:53.480094 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 7 02:00:53.498801 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 02:00:53.523638 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 7 02:00:53.523679 kernel: GPT:9289727 != 19775487
Mar 7 02:00:53.523700 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 7 02:00:53.523751 kernel: GPT:9289727 != 19775487
Mar 7 02:00:53.523768 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 7 02:00:53.523792 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 7 02:00:53.537941 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 02:00:53.597683 kernel: libata version 3.00 loaded.
Mar 7 02:00:53.745072 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (469)
Mar 7 02:00:53.807037 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
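The GPT warnings above report that the backup (alternate) header sits at LBA 9289727 while the virtio disk actually ends at LBA 19775487 — the usual signature of a small disk image written to a larger disk. A quick sketch of the arithmetic, using the block count the kernel logs for vda:

```shell
# virtio_blk reports 19775488 512-byte logical blocks for vda (see log above).
# GPT places its backup header in the disk's very last LBA:
blocks=19775488
echo "expected backup header LBA: $((blocks - 1))"
# The kernel instead found it at LBA 9289727 -- the last LBA of the original
# (smaller) image -- hence the message "GPT:9289727 != 19775487".
echo "backup header found at LBA:  9289727"
```

Besides GNU Parted, which the kernel message suggests, `sgdisk -e` (gdisk's "move backup structures to end of disk") is a common manual fix; here the repair appears to happen automatically during the later disk-uuid step, judging by the `Primary Header is updated.` / `Secondary Header is updated.` messages further down.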
Mar 7 02:00:54.284936 kernel: BTRFS: device fsid 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (471)
Mar 7 02:00:54.287428 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 7 02:00:54.287459 kernel: AES CTR mode by8 optimization enabled
Mar 7 02:00:54.287477 kernel: ahci 0000:00:1f.2: version 3.0
Mar 7 02:00:54.290351 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 7 02:00:54.290388 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 7 02:00:54.290690 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 7 02:00:54.290931 kernel: scsi host0: ahci
Mar 7 02:00:54.297379 kernel: scsi host1: ahci
Mar 7 02:00:54.297783 kernel: scsi host2: ahci
Mar 7 02:00:54.298093 kernel: scsi host3: ahci
Mar 7 02:00:54.301547 kernel: scsi host4: ahci
Mar 7 02:00:54.301816 kernel: scsi host5: ahci
Mar 7 02:00:54.302106 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Mar 7 02:00:54.302127 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Mar 7 02:00:54.302152 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Mar 7 02:00:54.302170 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Mar 7 02:00:54.302186 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Mar 7 02:00:54.302308 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Mar 7 02:00:54.330570 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 02:00:54.362840 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 7 02:00:54.396713 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 7 02:00:54.427120 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 7 02:00:54.444137 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 7 02:00:54.490582 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 7 02:00:54.490665 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 7 02:00:54.498590 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 7 02:00:54.501610 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 7 02:00:54.534584 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 7 02:00:54.534635 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 7 02:00:54.511560 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 02:00:54.616701 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 7 02:00:54.616752 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 7 02:00:54.616769 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 7 02:00:54.616786 kernel: ata3.00: applying bridge limits
Mar 7 02:00:54.616803 kernel: ata3.00: configured for UDMA/100
Mar 7 02:00:54.616820 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 7 02:00:54.616886 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 7 02:00:54.617357 disk-uuid[557]: Primary Header is updated.
Mar 7 02:00:54.617357 disk-uuid[557]: Secondary Entries is updated.
Mar 7 02:00:54.617357 disk-uuid[557]: Secondary Header is updated.
Mar 7 02:00:54.782449 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 02:00:54.995725 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 7 02:00:55.004047 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 7 02:00:55.217459 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 7 02:00:55.639532 disk-uuid[558]: Warning: The kernel is still using the old partition table.
Mar 7 02:00:55.639532 disk-uuid[558]: The new table will be used at the next reboot or after you
Mar 7 02:00:55.639532 disk-uuid[558]: run partprobe(8) or kpartx(8)
Mar 7 02:00:55.639532 disk-uuid[558]: The operation has completed successfully.
Mar 7 02:00:56.251632 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 7 02:00:56.251992 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 7 02:00:56.349534 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 7 02:00:56.378626 sh[593]: Success
Mar 7 02:00:56.416319 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 7 02:00:56.519960 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 7 02:00:56.562882 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 7 02:00:56.571861 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 7 02:00:56.646949 kernel: BTRFS info (device dm-0): first mount of filesystem 3bed8db9-42ad-4483-9cc8-1ad17a6cd948
Mar 7 02:00:56.647070 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 7 02:00:56.647095 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 7 02:00:56.658597 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 7 02:00:56.659164 kernel: BTRFS info (device dm-0): using free space tree
Mar 7 02:00:56.818987 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 7 02:00:56.831505 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 7 02:00:56.863904 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 7 02:00:56.899108 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 7 02:00:56.971349 kernel: BTRFS info (device vda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 02:00:56.971440 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 02:00:56.971464 kernel: BTRFS info (device vda6): using free space tree
Mar 7 02:00:57.002838 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 7 02:00:57.173648 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 7 02:00:57.202425 kernel: BTRFS info (device vda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 02:00:57.295748 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 7 02:00:57.362189 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 7 02:00:58.036091 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 02:00:58.213565 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 02:00:58.271432 ignition[717]: Ignition 2.19.0
Mar 7 02:00:58.271508 ignition[717]: Stage: fetch-offline
Mar 7 02:00:58.271641 ignition[717]: no configs at "/usr/lib/ignition/base.d"
Mar 7 02:00:58.271714 ignition[717]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 02:00:58.272055 ignition[717]: parsed url from cmdline: ""
Mar 7 02:00:58.272062 ignition[717]: no config URL provided
Mar 7 02:00:58.272072 ignition[717]: reading system config file "/usr/lib/ignition/user.ign"
Mar 7 02:00:58.272091 ignition[717]: no config at "/usr/lib/ignition/user.ign"
Mar 7 02:00:58.272307 ignition[717]: op(1): [started] loading QEMU firmware config module
Mar 7 02:00:58.272354 ignition[717]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 7 02:00:58.373182 ignition[717]: op(1): [finished] loading QEMU firmware config module
Mar 7 02:00:58.440305 systemd-networkd[780]: lo: Link UP
Mar 7 02:00:58.440346 systemd-networkd[780]: lo: Gained carrier
Mar 7 02:00:58.512697 systemd-networkd[780]: Enumeration completed
Mar 7 02:00:58.520843 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 02:00:58.528484 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 02:00:58.528492 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 02:00:58.607730 systemd-networkd[780]: eth0: Link UP
Mar 7 02:00:58.607738 systemd-networkd[780]: eth0: Gained carrier
Mar 7 02:00:58.607754 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 02:00:58.641826 systemd[1]: Reached target network.target - Network.
Mar 7 02:00:58.692341 systemd-networkd[780]: eth0: DHCPv4 address 10.0.0.146/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 7 02:00:59.407997 ignition[717]: parsing config with SHA512: bff04ba34aa989777ef063051d55e222bcd3f7de62ecc9969f3250cba7cb2fa42d59c5a54828bf0784269f153c23235237a1559366200a9704a2ae49d641aacd
Mar 7 02:00:59.468796 unknown[717]: fetched base config from "system"
Mar 7 02:00:59.469555 ignition[717]: fetch-offline: fetch-offline passed
Mar 7 02:00:59.468822 unknown[717]: fetched user config from "qemu"
Mar 7 02:00:59.469748 ignition[717]: Ignition finished successfully
Mar 7 02:00:59.478455 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 02:00:59.488417 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 7 02:00:59.536542 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
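Ignition logs only the SHA-512 of the config it parsed (the long digest above), not the config itself. Given a candidate config file, the digest can be recomputed with coreutils and compared against that log line; the JSON below is a placeholder, not the config from this boot:

```shell
# Hypothetical stand-in for the rendered Ignition config; substitute the real
# config file (e.g. the one served via the QEMU fw_cfg channel) to compare.
printf '%s' '{"ignition":{"version":"3.3.0"}}' | sha512sum
```

A match against the `parsing config with SHA512:` entry confirms which config Ignition actually consumed on this boot.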
Mar 7 02:00:59.635175 ignition[787]: Ignition 2.19.0
Mar 7 02:00:59.635357 ignition[787]: Stage: kargs
Mar 7 02:00:59.635824 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Mar 7 02:00:59.648479 systemd-networkd[780]: eth0: Gained IPv6LL
Mar 7 02:00:59.635846 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 02:00:59.655649 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 7 02:00:59.638362 ignition[787]: kargs: kargs passed
Mar 7 02:00:59.638439 ignition[787]: Ignition finished successfully
Mar 7 02:00:59.777861 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 7 02:01:00.241149 ignition[795]: Ignition 2.19.0
Mar 7 02:01:00.241251 ignition[795]: Stage: disks
Mar 7 02:01:00.262745 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Mar 7 02:01:00.262832 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 02:01:00.271456 ignition[795]: disks: disks passed
Mar 7 02:01:00.273349 ignition[795]: Ignition finished successfully
Mar 7 02:01:00.386490 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 7 02:01:00.415142 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 7 02:01:00.428023 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 7 02:01:00.450623 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 02:01:00.470091 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 02:01:00.470302 systemd[1]: Reached target basic.target - Basic System.
Mar 7 02:01:00.558902 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 7 02:01:00.786656 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 7 02:01:00.836978 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 7 02:01:00.914708 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 7 02:01:01.963141 kernel: EXT4-fs (vda9): mounted filesystem aab0506b-de72-4dd2-9393-24d7958f49a5 r/w with ordered data mode. Quota mode: none.
Mar 7 02:01:01.968737 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 7 02:01:02.018082 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 7 02:01:02.068471 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 02:01:02.115800 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 7 02:01:02.124473 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 7 02:01:02.124574 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 7 02:01:02.312047 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (814)
Mar 7 02:01:02.312089 kernel: BTRFS info (device vda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 02:01:02.312110 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 02:01:02.312128 kernel: BTRFS info (device vda6): using free space tree
Mar 7 02:01:02.124625 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 02:01:02.380718 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 7 02:01:02.419463 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 7 02:01:02.460985 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 7 02:01:02.488625 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 02:01:02.717128 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory
Mar 7 02:01:02.774433 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory
Mar 7 02:01:02.846003 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory
Mar 7 02:01:02.920454 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 7 02:01:03.675584 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 7 02:01:03.699749 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 7 02:01:03.731834 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 7 02:01:03.792735 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 7 02:01:03.816905 kernel: BTRFS info (device vda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 02:01:03.903951 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 7 02:01:04.026945 ignition[928]: INFO : Ignition 2.19.0
Mar 7 02:01:04.026945 ignition[928]: INFO : Stage: mount
Mar 7 02:01:04.026945 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 02:01:04.026945 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 02:01:04.165967 ignition[928]: INFO : mount: mount passed
Mar 7 02:01:04.165967 ignition[928]: INFO : Ignition finished successfully
Mar 7 02:01:04.047893 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 7 02:01:04.213075 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 7 02:01:04.313561 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 02:01:04.438704 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (939)
Mar 7 02:01:04.472322 kernel: BTRFS info (device vda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 02:01:04.472415 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 02:01:04.488116 kernel: BTRFS info (device vda6): using free space tree
Mar 7 02:01:04.508634 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 7 02:01:04.513154 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 02:01:04.624134 ignition[956]: INFO : Ignition 2.19.0
Mar 7 02:01:04.624134 ignition[956]: INFO : Stage: files
Mar 7 02:01:04.624134 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 02:01:04.624134 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 02:01:04.674383 ignition[956]: DEBUG : files: compiled without relabeling support, skipping
Mar 7 02:01:04.674383 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 7 02:01:04.674383 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 7 02:01:04.701746 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 7 02:01:04.701746 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 7 02:01:04.701746 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 7 02:01:04.701746 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 02:01:04.701746 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 7 02:01:04.685429 unknown[956]: wrote ssh authorized keys file for user: core
Mar 7 02:01:04.864886 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 7 02:01:05.432739 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 02:01:05.432739 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 7 02:01:05.432739 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 7 02:01:05.768816 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 7 02:01:08.021451 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 7 02:01:08.021451 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 7 02:01:08.066012 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 7 02:01:08.066012 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 02:01:08.066012 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 02:01:08.066012 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 02:01:08.066012 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 02:01:08.066012 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 02:01:08.066012 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 02:01:08.066012 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 02:01:08.066012 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 02:01:08.066012 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 7 02:01:08.066012 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 7 02:01:08.066012 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 7 02:01:08.066012 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Mar 7 02:01:08.490060 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 7 02:01:10.974690 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 7 02:01:10.974690 ignition[956]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 7 02:01:11.026761 ignition[956]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 02:01:11.026761 ignition[956]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 02:01:11.026761 ignition[956]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 7 02:01:11.026761 ignition[956]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Mar 7 02:01:11.026761 ignition[956]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 7 02:01:11.026761 ignition[956]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 7 02:01:11.026761 ignition[956]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Mar 7 02:01:11.026761 ignition[956]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Mar 7 02:01:11.237604 ignition[956]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 7 02:01:11.266119 ignition[956]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 7 02:01:11.266119 ignition[956]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 7 02:01:11.266119 ignition[956]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Mar 7 02:01:11.266119 ignition[956]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Mar 7 02:01:11.266119 ignition[956]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 02:01:11.266119 ignition[956]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 02:01:11.266119 ignition[956]: INFO : files: files passed
Mar 7 02:01:11.266119 ignition[956]: INFO : Ignition finished successfully
Mar 7 02:01:11.378827 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 7 02:01:11.460087 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
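The files stage above writes user files, an SSH key for `core`, a systemd unit, and preset changes, all driven by the Ignition config fetched earlier over the QEMU fw_cfg channel. A minimal sketch of a spec-3 config that would produce entries of this shape (the spec version, key, and unit body here are illustrative; the actual config for this boot is not printed in the log):

```json
{
  "ignition": { "version": "3.3.0" },
  "passwd": {
    "users": [
      { "name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAAC3NzaC1lZDI1NTE5 example-key"] }
    ]
  },
  "storage": {
    "files": [
      {
        "path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
        "contents": { "source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz" }
      }
    ]
  },
  "systemd": {
    "units": [
      {
        "name": "prepare-helm.service",
        "enabled": true,
        "contents": "[Unit]\nDescription=Unpack helm (illustrative)\n\n[Service]\nType=oneshot\nExecStart=/usr/bin/tar -C /opt/bin -xzf /opt/helm-v3.17.3-linux-amd64.tar.gz\n\n[Install]\nWantedBy=multi-user.target\n"
      }
    ]
  }
}
```

Each `storage.files` entry maps to one `createFiles: op(N)` pair in the log, and each `systemd.units` entry to an `op(N): processing unit` block plus a preset operation.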
Mar 7 02:01:11.539802 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 7 02:01:11.576432 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 7 02:01:11.576618 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 7 02:01:11.637770 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 7 02:01:11.667876 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 02:01:11.667876 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 02:01:11.708681 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 02:01:11.707526 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 02:01:11.745749 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 7 02:01:11.786771 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 7 02:01:11.901557 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 7 02:01:11.901799 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 7 02:01:11.923412 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 7 02:01:11.987741 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 7 02:01:12.024807 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 7 02:01:12.050869 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 7 02:01:12.195564 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 02:01:12.305597 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 7 02:01:12.362554 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 7 02:01:12.450133 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 02:01:12.465504 systemd[1]: Stopped target timers.target - Timer Units.
Mar 7 02:01:12.483112 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 7 02:01:12.485922 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 02:01:12.553720 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 7 02:01:12.565917 systemd[1]: Stopped target basic.target - Basic System.
Mar 7 02:01:12.566170 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 7 02:01:12.617652 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 02:01:12.634884 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 7 02:01:12.650757 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 7 02:01:12.676822 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 02:01:12.755551 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 7 02:01:12.782599 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 7 02:01:12.791466 systemd[1]: Stopped target swap.target - Swaps.
Mar 7 02:01:12.802506 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 7 02:01:12.802749 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 02:01:12.833625 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 7 02:01:12.847498 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 02:01:12.904358 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 7 02:01:12.904955 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 02:01:12.944104 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 7 02:01:12.955945 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 7 02:01:12.982992 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 7 02:01:12.984406 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 02:01:13.028435 systemd[1]: Stopped target paths.target - Path Units.
Mar 7 02:01:13.042026 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 7 02:01:13.051154 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 02:01:13.083075 systemd[1]: Stopped target slices.target - Slice Units.
Mar 7 02:01:13.091155 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 7 02:01:13.102593 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 7 02:01:13.107114 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 02:01:13.135827 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 7 02:01:13.136075 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 02:01:13.150688 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 7 02:01:13.150981 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 02:01:13.166551 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 7 02:01:13.166771 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 7 02:01:13.209879 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 7 02:01:13.246809 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 7 02:01:13.256851 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 7 02:01:13.292578 ignition[1010]: INFO : Ignition 2.19.0
Mar 7 02:01:13.292578 ignition[1010]: INFO : Stage: umount
Mar 7 02:01:13.292578 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 02:01:13.292578 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 02:01:13.292578 ignition[1010]: INFO : umount: umount passed
Mar 7 02:01:13.292578 ignition[1010]: INFO : Ignition finished successfully
Mar 7 02:01:13.257153 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 02:01:13.268690 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 7 02:01:13.268911 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 02:01:13.391327 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 7 02:01:13.399937 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 7 02:01:13.413632 systemd[1]: Stopped target network.target - Network.
Mar 7 02:01:13.424703 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 7 02:01:13.424855 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 7 02:01:13.437865 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 7 02:01:13.438023 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 7 02:01:13.452629 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 7 02:01:13.452799 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 7 02:01:13.467169 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 7 02:01:13.467483 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 7 02:01:13.496534 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 7 02:01:13.522991 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 7 02:01:13.544091 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 7 02:01:13.555630 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 7 02:01:13.555865 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 7 02:01:13.560680 systemd-networkd[780]: eth0: DHCPv6 lease lost
Mar 7 02:01:13.585500 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 7 02:01:13.585789 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 7 02:01:13.612483 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 7 02:01:13.616432 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 7 02:01:13.636926 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 7 02:01:13.637527 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 7 02:01:13.663034 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 7 02:01:13.663171 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 02:01:13.675990 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 7 02:01:13.676123 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 7 02:01:13.779565 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 7 02:01:13.912651 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 7 02:01:13.913257 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 02:01:13.958704 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 7 02:01:13.958858 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 7 02:01:13.966624 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 7 02:01:13.966720 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 7 02:01:13.996609 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 7 02:01:13.997154 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 02:01:14.008727 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 02:01:14.061605 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 7 02:01:14.072551 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 02:01:14.104002 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 7 02:01:14.104182 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 7 02:01:14.125168 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 7 02:01:14.125900 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 02:01:14.153572 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 7 02:01:14.153719 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 02:01:14.190161 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 7 02:01:14.191589 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 7 02:01:14.236459 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 02:01:14.236625 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 02:01:14.281837 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 7 02:01:14.290587 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 7 02:01:14.290693 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 02:01:14.319738 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 7 02:01:14.319849 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 02:01:14.323772 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 7 02:01:14.323875 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 02:01:14.354883 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 02:01:14.355038 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 02:01:14.390121 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 7 02:01:14.391875 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 7 02:01:14.439099 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 7 02:01:14.439648 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 7 02:01:14.450007 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 7 02:01:14.498047 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 7 02:01:14.592381 systemd[1]: Switching root.
Mar 7 02:01:14.693054 systemd-journald[195]: Journal stopped
Mar 7 02:01:20.374774 systemd-journald[195]: Received SIGTERM from PID 1 (systemd).
Mar 7 02:01:20.374903 kernel: SELinux: policy capability network_peer_controls=1
Mar 7 02:01:20.374935 kernel: SELinux: policy capability open_perms=1
Mar 7 02:01:20.374955 kernel: SELinux: policy capability extended_socket_class=1
Mar 7 02:01:20.374972 kernel: SELinux: policy capability always_check_network=0
Mar 7 02:01:20.374990 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 7 02:01:20.375010 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 7 02:01:20.375029 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 7 02:01:20.375053 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 7 02:01:20.375080 kernel: audit: type=1403 audit(1772848875.490:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 7 02:01:20.375111 systemd[1]: Successfully loaded SELinux policy in 124.153ms.
Mar 7 02:01:20.375141 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 35.145ms.
Mar 7 02:01:20.375161 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 02:01:20.375190 systemd[1]: Detected virtualization kvm.
Mar 7 02:01:20.375347 systemd[1]: Detected architecture x86-64.
Mar 7 02:01:20.375369 systemd[1]: Detected first boot.
Mar 7 02:01:20.375394 systemd[1]: Initializing machine ID from VM UUID.
Mar 7 02:01:20.375412 zram_generator::config[1054]: No configuration found.
Mar 7 02:01:20.375431 systemd[1]: Populated /etc with preset unit settings.
Mar 7 02:01:20.375450 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 7 02:01:20.375468 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 7 02:01:20.375486 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 7 02:01:20.375510 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 7 02:01:20.375529 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 7 02:01:20.375548 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 7 02:01:20.375617 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 7 02:01:20.375639 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 7 02:01:20.375657 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 7 02:01:20.375676 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 7 02:01:20.375699 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 7 02:01:20.375717 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 02:01:20.375736 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 02:01:20.375755 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 7 02:01:20.375827 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 7 02:01:20.375853 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 7 02:01:20.375883 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 02:01:20.375904 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 7 02:01:20.375924 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 02:01:20.375941 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 7 02:01:20.375956 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 7 02:01:20.375974 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 7 02:01:20.376038 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 7 02:01:20.376059 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 02:01:20.376085 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 02:01:20.376105 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 02:01:20.376123 systemd[1]: Reached target swap.target - Swaps.
Mar 7 02:01:20.376143 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 7 02:01:20.376163 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 7 02:01:20.376183 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 02:01:20.376282 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 02:01:20.376387 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 02:01:20.376411 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 7 02:01:20.376430 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 7 02:01:20.376450 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 7 02:01:20.376472 systemd[1]: Mounting media.mount - External Media Directory...
Mar 7 02:01:20.376492 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 02:01:20.376513 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 7 02:01:20.376533 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 7 02:01:20.376554 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 7 02:01:20.376624 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 7 02:01:20.376645 systemd[1]: Reached target machines.target - Containers.
Mar 7 02:01:20.376666 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 7 02:01:20.376683 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 02:01:20.376702 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 02:01:20.376722 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 7 02:01:20.376740 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 02:01:20.376759 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 02:01:20.376820 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 02:01:20.376839 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 7 02:01:20.376856 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 02:01:20.376874 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 7 02:01:20.376895 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 7 02:01:20.376912 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 7 02:01:20.376930 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 7 02:01:20.376949 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 7 02:01:20.376969 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 02:01:20.377030 kernel: loop: module loaded
Mar 7 02:01:20.377052 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 02:01:20.377071 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 7 02:01:20.377138 systemd-journald[1138]: Collecting audit messages is disabled.
Mar 7 02:01:20.377173 systemd-journald[1138]: Journal started
Mar 7 02:01:20.377290 systemd-journald[1138]: Runtime Journal (/run/log/journal/3e6c27a43abd4616a2fc1303dfb93787) is 6.0M, max 48.4M, 42.3M free.
Mar 7 02:01:18.032455 systemd[1]: Queued start job for default target multi-user.target.
Mar 7 02:01:18.115559 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 7 02:01:18.117817 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 7 02:01:18.118651 systemd[1]: systemd-journald.service: Consumed 2.088s CPU time.
Mar 7 02:01:20.444408 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 7 02:01:20.493341 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 02:01:20.544881 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 7 02:01:20.544997 systemd[1]: Stopped verity-setup.service.
Mar 7 02:01:20.563479 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 02:01:20.639473 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 02:01:20.645843 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 7 02:01:20.670099 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 7 02:01:20.695925 systemd[1]: Mounted media.mount - External Media Directory.
Mar 7 02:01:20.711980 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 7 02:01:20.730892 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 7 02:01:20.746136 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 7 02:01:20.775748 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 7 02:01:20.810453 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 02:01:20.819416 kernel: fuse: init (API version 7.39)
Mar 7 02:01:20.841658 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 7 02:01:20.851358 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 7 02:01:20.881984 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 02:01:20.885892 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 02:01:20.917429 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 02:01:20.919152 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 02:01:20.948393 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 7 02:01:20.948697 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 7 02:01:20.982157 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 02:01:20.990750 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 02:01:21.019141 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 02:01:21.049963 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 7 02:01:21.077884 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 7 02:01:21.160923 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 7 02:01:21.263097 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 7 02:01:21.294837 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 7 02:01:21.328117 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 7 02:01:21.328350 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 02:01:21.350431 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 7 02:01:21.392275 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 7 02:01:21.438662 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 7 02:01:21.460134 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 02:01:21.481089 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 7 02:01:21.514763 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 7 02:01:21.542984 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 02:01:21.565903 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 7 02:01:21.593697 kernel: ACPI: bus type drm_connector registered
Mar 7 02:01:21.603766 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 02:01:21.627376 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 02:01:21.668758 systemd-journald[1138]: Time spent on flushing to /var/log/journal/3e6c27a43abd4616a2fc1303dfb93787 is 43.867ms for 948 entries.
Mar 7 02:01:21.668758 systemd-journald[1138]: System Journal (/var/log/journal/3e6c27a43abd4616a2fc1303dfb93787) is 8.0M, max 195.6M, 187.6M free.
Mar 7 02:01:21.835475 systemd-journald[1138]: Received client request to flush runtime journal.
Mar 7 02:01:21.707830 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 7 02:01:21.796800 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 02:01:21.844960 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 02:01:21.849547 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 02:01:21.874751 kernel: loop0: detected capacity change from 0 to 140768
Mar 7 02:01:21.885608 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 02:01:21.906090 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 7 02:01:21.946148 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 7 02:01:21.981825 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 7 02:01:22.017512 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 7 02:01:22.038285 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 7 02:01:22.102836 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 7 02:01:22.170523 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 7 02:01:22.199144 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 7 02:01:22.214928 systemd-tmpfiles[1169]: ACLs are not supported, ignoring.
Mar 7 02:01:22.215612 systemd-tmpfiles[1169]: ACLs are not supported, ignoring.
Mar 7 02:01:22.227367 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 02:01:22.248989 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 02:01:22.291520 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 7 02:01:22.314792 udevadm[1184]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 7 02:01:22.360176 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 7 02:01:22.411818 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 7 02:01:22.424885 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 7 02:01:22.504379 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 7 02:01:22.542430 kernel: loop1: detected capacity change from 0 to 142488
Mar 7 02:01:22.553787 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 02:01:22.661840 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
Mar 7 02:01:22.661901 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
Mar 7 02:01:22.685914 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 02:01:22.810721 kernel: loop2: detected capacity change from 0 to 217752
Mar 7 02:01:22.958373 kernel: loop3: detected capacity change from 0 to 140768
Mar 7 02:01:23.077725 kernel: loop4: detected capacity change from 0 to 142488
Mar 7 02:01:23.165094 kernel: loop5: detected capacity change from 0 to 217752
Mar 7 02:01:23.263975 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 7 02:01:23.265153 (sd-merge)[1196]: Merged extensions into '/usr'.
Mar 7 02:01:23.286042 systemd[1]: Reloading requested from client PID 1168 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 7 02:01:23.288480 systemd[1]: Reloading...
Mar 7 02:01:23.716517 zram_generator::config[1222]: No configuration found.
Mar 7 02:01:24.390760 ldconfig[1163]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 7 02:01:24.392651 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 02:01:24.477964 systemd[1]: Reloading finished in 1188 ms.
Mar 7 02:01:24.559185 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 7 02:01:24.577391 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 7 02:01:24.595734 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 7 02:01:24.647028 systemd[1]: Starting ensure-sysext.service...
Mar 7 02:01:24.668070 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 02:01:24.683722 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 02:01:24.704844 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)...
Mar 7 02:01:24.704870 systemd[1]: Reloading...
Mar 7 02:01:24.799623 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 7 02:01:24.800268 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 7 02:01:24.804016 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 7 02:01:24.805425 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Mar 7 02:01:24.807456 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Mar 7 02:01:24.828745 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 02:01:24.828766 systemd-tmpfiles[1261]: Skipping /boot
Mar 7 02:01:24.873351 systemd-udevd[1262]: Using default interface naming scheme 'v255'.
Mar 7 02:01:24.896712 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 02:01:24.896772 systemd-tmpfiles[1261]: Skipping /boot
Mar 7 02:01:25.050477 zram_generator::config[1291]: No configuration found.
Mar 7 02:01:25.539494 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1321)
Mar 7 02:01:25.645117 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 7 02:01:25.665177 kernel: ACPI: button: Power Button [PWRF]
Mar 7 02:01:25.711829 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 02:01:25.906053 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 7 02:01:25.926147 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 7 02:01:25.926780 systemd[1]: Reloading finished in 1219 ms.
Mar 7 02:01:25.952381 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 7 02:01:25.952829 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 7 02:01:25.970973 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 7 02:01:26.052982 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 02:01:26.137479 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 7 02:01:26.156853 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 02:01:26.270288 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 02:01:26.346989 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 7 02:01:26.399964 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 7 02:01:26.419594 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 02:01:26.449039 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 02:01:26.506520 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 02:01:26.580660 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 02:01:26.598837 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 02:01:26.622879 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 7 02:01:26.673028 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 7 02:01:26.691906 augenrules[1377]: No rules
Mar 7 02:01:26.729168 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 02:01:26.804611 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 02:01:26.816159 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 7 02:01:26.843303 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 02:01:26.869170 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 02:01:26.877295 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 7 02:01:26.885392 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 02:01:26.885958 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 02:01:26.895752 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 02:01:26.896075 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 02:01:26.906959 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 02:01:26.908548 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 02:01:26.947937 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 7 02:01:27.038788 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 02:01:27.039297 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 02:01:27.236952 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 02:01:27.257559 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 02:01:27.294599 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 02:01:27.319956 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 02:01:27.320484 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 02:01:27.365817 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 7 02:01:27.365978 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 7 02:01:27.368737 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 7 02:01:27.440412 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 7 02:01:27.465537 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 7 02:01:27.481177 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 02:01:27.483833 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 02:01:27.502824 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 02:01:27.503162 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 02:01:27.505867 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 02:01:27.506899 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 02:01:27.507936 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 02:01:27.508175 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 02:01:27.518765 systemd[1]: Finished ensure-sysext.service.
Mar 7 02:01:27.592650 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 02:01:27.592821 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 02:01:27.697993 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 7 02:01:28.295129 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 7 02:01:28.338612 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 7 02:01:28.339120 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 7 02:01:28.366268 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 02:01:28.408436 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 7 02:01:28.473109 systemd-networkd[1383]: lo: Link UP
Mar 7 02:01:28.473129 systemd-networkd[1383]: lo: Gained carrier
Mar 7 02:01:28.480565 systemd-networkd[1383]: Enumeration completed
Mar 7 02:01:28.481429 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 02:01:28.488474 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 02:01:28.488483 systemd-networkd[1383]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 02:01:28.494161 systemd-networkd[1383]: eth0: Link UP
Mar 7 02:01:28.494172 systemd-networkd[1383]: eth0: Gained carrier
Mar 7 02:01:28.494281 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 02:01:28.577704 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 7 02:01:28.637848 systemd-networkd[1383]: eth0: DHCPv4 address 10.0.0.146/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 7 02:01:28.749692 systemd-resolved[1385]: Positive Trust Anchors:
Mar 7 02:01:28.749758 systemd-resolved[1385]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 02:01:28.749808 systemd-resolved[1385]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 02:01:28.790967 systemd-resolved[1385]: Defaulting to hostname 'linux'.
Mar 7 02:01:28.807923 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 02:01:28.835909 systemd[1]: Reached target network.target - Network.
Mar 7 02:01:28.859566 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 02:01:29.041991 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 7 02:01:29.049711 systemd-timesyncd[1410]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 7 02:01:29.058680 systemd-timesyncd[1410]: Initial clock synchronization to Sat 2026-03-07 02:01:29.293305 UTC.
Mar 7 02:01:29.080906 systemd[1]: Reached target time-set.target - System Time Set.
Mar 7 02:01:29.563607 kernel: mousedev: PS/2 mouse device common for all mice
Mar 7 02:01:29.749551 kernel: kvm_amd: TSC scaling supported
Mar 7 02:01:29.749682 kernel: kvm_amd: Nested Virtualization enabled
Mar 7 02:01:29.753397 kernel: kvm_amd: Nested Paging enabled
Mar 7 02:01:29.753485 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 7 02:01:29.760379 kernel: kvm_amd: PMU virtualization is disabled
Mar 7 02:01:30.205974 kernel: EDAC MC: Ver: 3.0.0
Mar 7 02:01:30.254518 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 7 02:01:30.301537 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 7 02:01:30.303136 systemd-networkd[1383]: eth0: Gained IPv6LL
Mar 7 02:01:30.323290 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 7 02:01:30.361601 systemd[1]: Reached target network-online.target - Network is Online.
Mar 7 02:01:30.425031 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 7 02:01:30.565174 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 7 02:01:30.589391 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 02:01:30.610971 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 02:01:30.628773 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 7 02:01:30.657031 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 7 02:01:30.690315 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 7 02:01:30.714151 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 7 02:01:30.756770 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 7 02:01:30.768185 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 7 02:01:30.768692 systemd[1]: Reached target paths.target - Path Units.
Mar 7 02:01:30.773978 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 02:01:30.783423 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 7 02:01:30.792968 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 7 02:01:30.827537 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 7 02:01:30.901049 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 7 02:01:30.942050 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 7 02:01:30.981056 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 02:01:31.008737 systemd[1]: Reached target basic.target - Basic System.
Mar 7 02:01:31.034466 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 7 02:01:31.034521 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 7 02:01:31.043553 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 7 02:01:31.065403 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 7 02:01:31.092957 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Mar 7 02:01:31.142746 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 7 02:01:31.209630 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 7 02:01:31.245636 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 7 02:01:31.278753 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 7 02:01:31.285704 jq[1435]: false
Mar 7 02:01:31.297568 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 02:01:31.358833 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 7 02:01:31.407954 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 7 02:01:31.461039 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 7 02:01:31.481660 extend-filesystems[1436]: Found loop3
Mar 7 02:01:31.481660 extend-filesystems[1436]: Found loop4
Mar 7 02:01:31.481660 extend-filesystems[1436]: Found loop5
Mar 7 02:01:31.481660 extend-filesystems[1436]: Found sr0
Mar 7 02:01:31.481660 extend-filesystems[1436]: Found vda
Mar 7 02:01:31.481660 extend-filesystems[1436]: Found vda1
Mar 7 02:01:31.481660 extend-filesystems[1436]: Found vda2
Mar 7 02:01:31.481660 extend-filesystems[1436]: Found vda3
Mar 7 02:01:31.481660 extend-filesystems[1436]: Found usr
Mar 7 02:01:31.481660 extend-filesystems[1436]: Found vda4
Mar 7 02:01:31.481660 extend-filesystems[1436]: Found vda6
Mar 7 02:01:31.481660 extend-filesystems[1436]: Found vda7
Mar 7 02:01:31.481660 extend-filesystems[1436]: Found vda9
Mar 7 02:01:31.481660 extend-filesystems[1436]: Checking size of /dev/vda9
Mar 7 02:01:31.925806 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1302)
Mar 7 02:01:31.925867 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 7 02:01:31.552487 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 7 02:01:31.578405 dbus-daemon[1434]: [system] SELinux support is enabled
Mar 7 02:01:31.926798 extend-filesystems[1436]: Resized partition /dev/vda9
Mar 7 02:01:31.697485 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 7 02:01:31.957733 extend-filesystems[1458]: resize2fs 1.47.1 (20-May-2024)
Mar 7 02:01:31.913307 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 7 02:01:31.967771 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 7 02:01:31.970910 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 7 02:01:32.031290 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 7 02:01:32.033140 systemd[1]: Starting update-engine.service - Update Engine...
Mar 7 02:01:32.072755 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 7 02:01:32.168363 update_engine[1464]: I20260307 02:01:32.165373 1464 main.cc:92] Flatcar Update Engine starting
Mar 7 02:01:32.095873 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 7 02:01:32.129177 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 7 02:01:32.180353 extend-filesystems[1458]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 7 02:01:32.180353 extend-filesystems[1458]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 7 02:01:32.180353 extend-filesystems[1458]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 7 02:01:32.225579 update_engine[1464]: I20260307 02:01:32.179437 1464 update_check_scheduler.cc:74] Next update check in 4m53s
Mar 7 02:01:32.223356 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 7 02:01:32.225729 jq[1465]: true
Mar 7 02:01:32.226159 extend-filesystems[1436]: Resized filesystem in /dev/vda9
Mar 7 02:01:32.223725 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 7 02:01:32.224511 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 7 02:01:32.224835 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 7 02:01:32.262458 systemd[1]: motdgen.service: Deactivated successfully.
Mar 7 02:01:32.267117 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 7 02:01:32.278730 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 7 02:01:32.347767 systemd-logind[1461]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 7 02:01:32.347911 systemd-logind[1461]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 7 02:01:32.349517 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 7 02:01:32.349859 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 7 02:01:32.367168 systemd-logind[1461]: New seat seat0.
Mar 7 02:01:32.384430 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 7 02:01:32.493652 jq[1472]: true
Mar 7 02:01:32.496419 (ntainerd)[1473]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 7 02:01:32.582678 systemd[1]: coreos-metadata.service: Deactivated successfully.
Mar 7 02:01:32.589420 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Mar 7 02:01:32.666837 dbus-daemon[1434]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 7 02:01:32.689154 tar[1470]: linux-amd64/LICENSE
Mar 7 02:01:32.689154 tar[1470]: linux-amd64/helm
Mar 7 02:01:32.750767 systemd[1]: Started update-engine.service - Update Engine.
Mar 7 02:01:32.775828 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 7 02:01:32.776291 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 7 02:01:32.776517 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 7 02:01:32.795346 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 7 02:01:32.800012 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 7 02:01:32.856418 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 7 02:01:32.922512 bash[1504]: Updated "/home/core/.ssh/authorized_keys"
Mar 7 02:01:32.945330 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 7 02:01:32.987027 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 7 02:01:33.241015 locksmithd[1505]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 7 02:01:33.381393 sshd_keygen[1462]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 7 02:01:33.648551 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 7 02:01:33.715003 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 7 02:01:33.827786 systemd[1]: issuegen.service: Deactivated successfully.
Mar 7 02:01:33.828204 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 7 02:01:33.911468 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 7 02:01:34.026830 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 7 02:01:34.052325 containerd[1473]: time="2026-03-07T02:01:34.045200400Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 7 02:01:34.068095 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 7 02:01:34.178076 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 7 02:01:34.207748 systemd[1]: Reached target getty.target - Login Prompts.
Mar 7 02:01:34.248668 containerd[1473]: time="2026-03-07T02:01:34.248315439Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 7 02:01:34.261381 containerd[1473]: time="2026-03-07T02:01:34.261086130Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 7 02:01:34.261381 containerd[1473]: time="2026-03-07T02:01:34.261189086Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 7 02:01:34.261381 containerd[1473]: time="2026-03-07T02:01:34.261289044Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 7 02:01:34.261663 containerd[1473]: time="2026-03-07T02:01:34.261631401Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 7 02:01:34.261802 containerd[1473]: time="2026-03-07T02:01:34.261665217Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 7 02:01:34.261836 containerd[1473]: time="2026-03-07T02:01:34.261791036Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 02:01:34.261836 containerd[1473]: time="2026-03-07T02:01:34.261814235Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 7 02:01:34.262287 containerd[1473]: time="2026-03-07T02:01:34.262136137Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 02:01:34.262287 containerd[1473]: time="2026-03-07T02:01:34.262161977Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 7 02:01:34.262287 containerd[1473]: time="2026-03-07T02:01:34.262180088Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 02:01:34.262287 containerd[1473]: time="2026-03-07T02:01:34.262196467Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 7 02:01:34.265072 containerd[1473]: time="2026-03-07T02:01:34.263509185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 7 02:01:34.265072 containerd[1473]: time="2026-03-07T02:01:34.264154740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 7 02:01:34.265072 containerd[1473]: time="2026-03-07T02:01:34.264576294Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 7 02:01:34.265072 containerd[1473]: time="2026-03-07T02:01:34.264604765Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 7 02:01:34.268664 containerd[1473]: time="2026-03-07T02:01:34.266970111Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 7 02:01:34.268664 containerd[1473]: time="2026-03-07T02:01:34.267134856Z" level=info msg="metadata content store policy set" policy=shared
Mar 7 02:01:34.320962 containerd[1473]: time="2026-03-07T02:01:34.306051509Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 7 02:01:34.320962 containerd[1473]: time="2026-03-07T02:01:34.315378306Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 7 02:01:34.320962 containerd[1473]: time="2026-03-07T02:01:34.315452167Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 7 02:01:34.320962 containerd[1473]: time="2026-03-07T02:01:34.315498924Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 7 02:01:34.320962 containerd[1473]: time="2026-03-07T02:01:34.315533729Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 7 02:01:34.320962 containerd[1473]: time="2026-03-07T02:01:34.315879288Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 7 02:01:34.326086 containerd[1473]: time="2026-03-07T02:01:34.325979145Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 7 02:01:34.344264 containerd[1473]: time="2026-03-07T02:01:34.344046869Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 7 02:01:34.344264 containerd[1473]: time="2026-03-07T02:01:34.344140769Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 7 02:01:34.344264 containerd[1473]: time="2026-03-07T02:01:34.344167641Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 7 02:01:34.344264 containerd[1473]: time="2026-03-07T02:01:34.344191483Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 7 02:01:34.344538 containerd[1473]: time="2026-03-07T02:01:34.344317067Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 7 02:01:34.344538 containerd[1473]: time="2026-03-07T02:01:34.344339431Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 7 02:01:34.344538 containerd[1473]: time="2026-03-07T02:01:34.344373674Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 7 02:01:34.344538 containerd[1473]: time="2026-03-07T02:01:34.344397966Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 7 02:01:34.344538 containerd[1473]: time="2026-03-07T02:01:34.344419687Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 7 02:01:34.344538 containerd[1473]: time="2026-03-07T02:01:34.344439103Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 7 02:01:34.344538 containerd[1473]: time="2026-03-07T02:01:34.344456571Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 7 02:01:34.344538 containerd[1473]: time="2026-03-07T02:01:34.344490408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 7 02:01:34.344538 containerd[1473]: time="2026-03-07T02:01:34.344516055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 7 02:01:34.344538 containerd[1473]: time="2026-03-07T02:01:34.344537562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 7 02:01:34.344866 containerd[1473]: time="2026-03-07T02:01:34.344558886Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 7 02:01:34.344866 containerd[1473]: time="2026-03-07T02:01:34.344574937Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 7 02:01:34.344866 containerd[1473]: time="2026-03-07T02:01:34.344593251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 7 02:01:34.344866 containerd[1473]: time="2026-03-07T02:01:34.344610252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 7 02:01:34.344866 containerd[1473]: time="2026-03-07T02:01:34.344628005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 7 02:01:34.344866 containerd[1473]: time="2026-03-07T02:01:34.344648911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 7 02:01:34.344866 containerd[1473]: time="2026-03-07T02:01:34.344675537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 7 02:01:34.344866 containerd[1473]: time="2026-03-07T02:01:34.344696932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 7 02:01:34.344866 containerd[1473]: time="2026-03-07T02:01:34.344713696Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 7 02:01:34.344866 containerd[1473]: time="2026-03-07T02:01:34.344735611Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 7 02:01:34.344866 containerd[1473]: time="2026-03-07T02:01:34.344757791Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 7 02:01:34.348801 containerd[1473]: time="2026-03-07T02:01:34.348131996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 7 02:01:34.348801 containerd[1473]: time="2026-03-07T02:01:34.348303685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 7 02:01:34.348801 containerd[1473]: time="2026-03-07T02:01:34.348330964Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 7 02:01:34.349546 containerd[1473]: time="2026-03-07T02:01:34.349484884Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 7 02:01:34.349546 containerd[1473]: time="2026-03-07T02:01:34.349527756Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 7 02:01:34.349656 containerd[1473]: time="2026-03-07T02:01:34.349551007Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 7 02:01:34.349656 containerd[1473]: time="2026-03-07T02:01:34.349570953Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 7 02:01:34.349656 containerd[1473]: time="2026-03-07T02:01:34.349587423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 7 02:01:34.349656 containerd[1473]: time="2026-03-07T02:01:34.349621778Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 7 02:01:34.349656 containerd[1473]: time="2026-03-07T02:01:34.349647660Z" level=info msg="NRI interface is disabled by configuration."
Mar 7 02:01:34.349787 containerd[1473]: time="2026-03-07T02:01:34.349663670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 7 02:01:34.350307 containerd[1473]: time="2026-03-07T02:01:34.350057608Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 7 02:01:34.350307 containerd[1473]: time="2026-03-07T02:01:34.350159003Z" level=info msg="Connect containerd service"
Mar 7 02:01:34.353384 containerd[1473]: time="2026-03-07T02:01:34.351439397Z" level=info msg="using legacy CRI server"
Mar 7 02:01:34.353384 containerd[1473]: time="2026-03-07T02:01:34.351464003Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 7 02:01:34.353384 containerd[1473]: time="2026-03-07T02:01:34.351630185Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 7 02:01:34.359570 containerd[1473]: time="2026-03-07T02:01:34.358779008Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 7 02:01:34.359925 containerd[1473]: time="2026-03-07T02:01:34.359727506Z" level=info msg="Start subscribing containerd event"
Mar 7 02:01:34.359925 containerd[1473]: time="2026-03-07T02:01:34.359786162Z" level=info msg="Start recovering state"
Mar 7 02:01:34.363654 containerd[1473]: time="2026-03-07T02:01:34.362497946Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 7 02:01:34.363654 containerd[1473]: time="2026-03-07T02:01:34.363183913Z" level=info msg="Start event monitor"
Mar 7 02:01:34.363654 containerd[1473]: time="2026-03-07T02:01:34.363288899Z" level=info msg="Start snapshots syncer"
Mar 7 02:01:34.363654 containerd[1473]: time="2026-03-07T02:01:34.363323470Z" level=info msg="Start cni network conf syncer for default"
Mar 7 02:01:34.363654 containerd[1473]: time="2026-03-07T02:01:34.363335839Z" level=info msg="Start streaming server"
Mar 7 02:01:34.368707 containerd[1473]: time="2026-03-07T02:01:34.364316981Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 7 02:01:34.368707 containerd[1473]: time="2026-03-07T02:01:34.364425137Z" level=info msg="containerd successfully booted in 0.341229s"
Mar 7 02:01:34.368521 systemd[1]: Started containerd.service - containerd container runtime.
Mar 7 02:01:35.738277 tar[1470]: linux-amd64/README.md
Mar 7 02:01:35.900341 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 7 02:01:39.245075 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 02:01:39.273130 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 7 02:01:39.459071 systemd[1]: Startup finished in 12.094s (kernel) + 30.559s (initrd) + 24.088s (userspace) = 1min 6.742s.
Mar 7 02:01:39.476021 (kubelet)[1546]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 02:01:39.862091 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 7 02:01:39.979961 systemd[1]: Started sshd@0-10.0.0.146:22-10.0.0.1:37124.service - OpenSSH per-connection server daemon (10.0.0.1:37124).
Mar 7 02:01:40.502265 sshd[1552]: Accepted publickey for core from 10.0.0.1 port 37124 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 02:01:40.513653 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:01:40.764943 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 7 02:01:40.775714 systemd-logind[1461]: New session 1 of user core. Mar 7 02:01:40.797801 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 7 02:01:41.181065 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 7 02:01:41.480691 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 7 02:01:42.024797 (systemd)[1561]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 7 02:01:44.983753 systemd[1561]: Queued start job for default target default.target. Mar 7 02:01:45.044464 systemd[1561]: Created slice app.slice - User Application Slice. Mar 7 02:01:45.044555 systemd[1561]: Reached target paths.target - Paths. Mar 7 02:01:45.044578 systemd[1561]: Reached target timers.target - Timers. Mar 7 02:01:45.099352 systemd[1561]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 7 02:01:45.203162 systemd[1561]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 7 02:01:45.203635 systemd[1561]: Reached target sockets.target - Sockets. Mar 7 02:01:45.203667 systemd[1561]: Reached target basic.target - Basic System. Mar 7 02:01:45.203807 systemd[1561]: Reached target default.target - Main User Target. Mar 7 02:01:45.203888 systemd[1561]: Startup finished in 2.859s. Mar 7 02:01:45.205653 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 7 02:01:45.240393 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 7 02:01:45.503071 systemd[1]: Started sshd@1-10.0.0.146:22-10.0.0.1:55236.service - OpenSSH per-connection server daemon (10.0.0.1:55236). 
Mar 7 02:01:45.775065 kubelet[1546]: E0307 02:01:45.774572 1546 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 02:01:45.776138 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 55236 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 02:01:45.780072 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:01:45.792874 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 02:01:45.793439 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 02:01:45.795435 systemd[1]: kubelet.service: Consumed 3.605s CPU time. Mar 7 02:01:45.818411 systemd-logind[1461]: New session 2 of user core. Mar 7 02:01:45.824100 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 7 02:01:46.007810 sshd[1574]: pam_unix(sshd:session): session closed for user core Mar 7 02:01:46.055762 systemd[1]: sshd@1-10.0.0.146:22-10.0.0.1:55236.service: Deactivated successfully. Mar 7 02:01:46.063827 systemd[1]: session-2.scope: Deactivated successfully. Mar 7 02:01:46.086449 systemd-logind[1461]: Session 2 logged out. Waiting for processes to exit. Mar 7 02:01:46.131376 systemd[1]: Started sshd@2-10.0.0.146:22-10.0.0.1:55244.service - OpenSSH per-connection server daemon (10.0.0.1:55244). Mar 7 02:01:46.164669 systemd-logind[1461]: Removed session 2. Mar 7 02:01:46.315616 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 55244 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 02:01:46.325189 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:01:46.387340 systemd-logind[1461]: New session 3 of user core. 
Mar 7 02:01:46.403978 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 7 02:01:46.569811 sshd[1582]: pam_unix(sshd:session): session closed for user core Mar 7 02:01:46.629571 systemd[1]: sshd@2-10.0.0.146:22-10.0.0.1:55244.service: Deactivated successfully. Mar 7 02:01:46.659860 systemd[1]: session-3.scope: Deactivated successfully. Mar 7 02:01:46.665539 systemd-logind[1461]: Session 3 logged out. Waiting for processes to exit. Mar 7 02:01:46.697984 systemd[1]: Started sshd@3-10.0.0.146:22-10.0.0.1:55256.service - OpenSSH per-connection server daemon (10.0.0.1:55256). Mar 7 02:01:46.704022 systemd-logind[1461]: Removed session 3. Mar 7 02:01:46.915851 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 55256 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 02:01:46.918510 sshd[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:01:46.987296 systemd-logind[1461]: New session 4 of user core. Mar 7 02:01:47.026066 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 7 02:01:47.220798 sshd[1589]: pam_unix(sshd:session): session closed for user core Mar 7 02:01:47.314423 systemd[1]: sshd@3-10.0.0.146:22-10.0.0.1:55256.service: Deactivated successfully. Mar 7 02:01:47.330152 systemd[1]: session-4.scope: Deactivated successfully. Mar 7 02:01:47.343645 systemd-logind[1461]: Session 4 logged out. Waiting for processes to exit. Mar 7 02:01:47.398643 systemd[1]: Started sshd@4-10.0.0.146:22-10.0.0.1:55272.service - OpenSSH per-connection server daemon (10.0.0.1:55272). Mar 7 02:01:47.409869 systemd-logind[1461]: Removed session 4. Mar 7 02:01:47.644883 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 55272 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 02:01:47.650516 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:01:47.734183 systemd-logind[1461]: New session 5 of user core. 
Mar 7 02:01:47.742639 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 7 02:01:48.002590 sudo[1599]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 7 02:01:48.003573 sudo[1599]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 02:01:48.113822 sudo[1599]: pam_unix(sudo:session): session closed for user root Mar 7 02:01:48.142715 sshd[1596]: pam_unix(sshd:session): session closed for user core Mar 7 02:01:48.209621 systemd[1]: sshd@4-10.0.0.146:22-10.0.0.1:55272.service: Deactivated successfully. Mar 7 02:01:48.233181 systemd[1]: session-5.scope: Deactivated successfully. Mar 7 02:01:48.254481 systemd-logind[1461]: Session 5 logged out. Waiting for processes to exit. Mar 7 02:01:48.302908 systemd[1]: Started sshd@5-10.0.0.146:22-10.0.0.1:55284.service - OpenSSH per-connection server daemon (10.0.0.1:55284). Mar 7 02:01:48.321940 systemd-logind[1461]: Removed session 5. Mar 7 02:01:48.639414 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 55284 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 02:01:48.648508 sshd[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:01:50.111524 systemd-logind[1461]: New session 6 of user core. Mar 7 02:01:50.169651 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 7 02:01:50.720824 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 7 02:01:50.724021 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 02:01:50.746817 sudo[1608]: pam_unix(sudo:session): session closed for user root Mar 7 02:01:50.760579 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 7 02:01:50.761302 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 02:01:50.969026 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 7 02:01:51.012557 auditctl[1611]: No rules Mar 7 02:01:51.028638 systemd[1]: audit-rules.service: Deactivated successfully. Mar 7 02:01:51.029298 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 7 02:01:51.074723 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 7 02:01:51.672177 augenrules[1629]: No rules Mar 7 02:01:51.712033 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 7 02:01:51.729748 sudo[1607]: pam_unix(sudo:session): session closed for user root Mar 7 02:01:52.225793 sshd[1604]: pam_unix(sshd:session): session closed for user core Mar 7 02:01:52.700347 systemd[1]: sshd@5-10.0.0.146:22-10.0.0.1:55284.service: Deactivated successfully. Mar 7 02:01:52.721912 systemd[1]: session-6.scope: Deactivated successfully. Mar 7 02:01:52.773914 systemd-logind[1461]: Session 6 logged out. Waiting for processes to exit. Mar 7 02:01:52.831596 systemd[1]: Started sshd@6-10.0.0.146:22-10.0.0.1:34586.service - OpenSSH per-connection server daemon (10.0.0.1:34586). Mar 7 02:01:52.847944 systemd-logind[1461]: Removed session 6. 
Mar 7 02:01:53.176044 sshd[1637]: Accepted publickey for core from 10.0.0.1 port 34586 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 02:01:53.225899 sshd[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:01:53.301979 systemd-logind[1461]: New session 7 of user core. Mar 7 02:01:53.335729 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 7 02:01:53.586170 sudo[1640]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 7 02:01:53.597843 sudo[1640]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 02:01:56.080691 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 7 02:02:00.474099 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 02:02:02.968943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:02:03.427996 (kubelet)[1660]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 02:02:05.428545 kubelet[1660]: E0307 02:02:05.426776 1660 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 02:02:05.458840 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 02:02:05.459947 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 02:02:05.462151 systemd[1]: kubelet.service: Consumed 2.333s CPU time. Mar 7 02:02:11.055389 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Mar 7 02:02:11.058629 (dockerd)[1677]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 7 02:02:13.058760 dockerd[1677]: time="2026-03-07T02:02:13.058414559Z" level=info msg="Starting up" Mar 7 02:02:14.242679 dockerd[1677]: time="2026-03-07T02:02:14.239392125Z" level=info msg="Loading containers: start." Mar 7 02:02:15.630799 kernel: Initializing XFRM netlink socket Mar 7 02:02:15.679295 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 7 02:02:15.727295 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 02:02:17.292852 systemd-networkd[1383]: docker0: Link UP Mar 7 02:02:17.345977 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:02:17.379174 (kubelet)[1783]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 02:02:17.718871 update_engine[1464]: I20260307 02:02:17.718626 1464 update_attempter.cc:509] Updating boot flags... Mar 7 02:02:17.857377 dockerd[1677]: time="2026-03-07T02:02:17.851558443Z" level=info msg="Loading containers: done." 
Mar 7 02:02:18.021436 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1802) Mar 7 02:02:19.046523 dockerd[1677]: time="2026-03-07T02:02:19.046406561Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 7 02:02:19.055335 dockerd[1677]: time="2026-03-07T02:02:19.055187192Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 7 02:02:19.055620 dockerd[1677]: time="2026-03-07T02:02:19.055514717Z" level=info msg="Daemon has completed initialization" Mar 7 02:02:19.193614 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1753) Mar 7 02:02:19.485180 kubelet[1783]: E0307 02:02:19.484855 1783 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 02:02:19.568477 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 02:02:19.569567 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 02:02:19.577063 systemd[1]: kubelet.service: Consumed 1.844s CPU time. Mar 7 02:02:19.838020 systemd[1]: Started docker.service - Docker Application Container Engine. 
Mar 7 02:02:19.855519 dockerd[1677]: time="2026-03-07T02:02:19.840960860Z" level=info msg="API listen on /run/docker.sock" Mar 7 02:02:26.038588 containerd[1473]: time="2026-03-07T02:02:26.038144548Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\"" Mar 7 02:02:29.691117 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3398431261.mount: Deactivated successfully. Mar 7 02:02:29.715519 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 7 02:02:29.823508 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 02:02:31.175119 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:02:31.195330 (kubelet)[1874]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 02:02:32.846138 kubelet[1874]: E0307 02:02:32.842468 1874 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 02:02:32.859004 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 02:02:32.863054 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 02:02:32.865931 systemd[1]: kubelet.service: Consumed 1.536s CPU time. 
Mar 7 02:02:40.443986 containerd[1473]: time="2026-03-07T02:02:40.440882942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:02:40.447765 containerd[1473]: time="2026-03-07T02:02:40.445077260Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=27696467" Mar 7 02:02:40.449139 containerd[1473]: time="2026-03-07T02:02:40.448874010Z" level=info msg="ImageCreate event name:\"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:02:40.467829 containerd[1473]: time="2026-03-07T02:02:40.467735212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:02:40.473976 containerd[1473]: time="2026-03-07T02:02:40.473869324Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"27693066\" in 14.435503546s" Mar 7 02:02:40.473976 containerd[1473]: time="2026-03-07T02:02:40.473965151Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\"" Mar 7 02:02:40.478761 containerd[1473]: time="2026-03-07T02:02:40.478119303Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\"" Mar 7 02:02:42.925459 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
Mar 7 02:02:42.938944 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 02:02:43.761513 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:02:43.795949 (kubelet)[1941]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 02:02:44.061914 kubelet[1941]: E0307 02:02:44.061383 1941 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 02:02:44.077561 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 02:02:44.077898 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 02:02:45.693288 containerd[1473]: time="2026-03-07T02:02:45.691122893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:02:45.704047 containerd[1473]: time="2026-03-07T02:02:45.702815415Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=21450700" Mar 7 02:02:45.712477 containerd[1473]: time="2026-03-07T02:02:45.709378864Z" level=info msg="ImageCreate event name:\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:02:45.746072 containerd[1473]: time="2026-03-07T02:02:45.743500327Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:02:45.746679 containerd[1473]: time="2026-03-07T02:02:45.746349803Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"23142311\" in 5.268174671s" Mar 7 02:02:45.746679 containerd[1473]: time="2026-03-07T02:02:45.746399339Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\"" Mar 7 02:02:45.748132 containerd[1473]: time="2026-03-07T02:02:45.747850426Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\"" Mar 7 02:02:54.321946 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Mar 7 02:02:54.477810 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 02:02:57.296055 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:02:57.554134 (kubelet)[1961]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 02:02:58.443558 kubelet[1961]: E0307 02:02:58.441330 1961 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 02:02:58.489760 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 02:02:58.490700 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 02:02:58.498756 systemd[1]: kubelet.service: Consumed 1.399s CPU time. 
Mar 7 02:02:59.070681 containerd[1473]: time="2026-03-07T02:02:59.068159316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:02:59.074882 containerd[1473]: time="2026-03-07T02:02:59.074551649Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=15548429" Mar 7 02:02:59.079855 containerd[1473]: time="2026-03-07T02:02:59.079555753Z" level=info msg="ImageCreate event name:\"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:02:59.113277 containerd[1473]: time="2026-03-07T02:02:59.112682079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:02:59.117354 containerd[1473]: time="2026-03-07T02:02:59.114552092Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"17240058\" in 13.366654906s" Mar 7 02:02:59.117354 containerd[1473]: time="2026-03-07T02:02:59.114630793Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\"" Mar 7 02:02:59.145613 containerd[1473]: time="2026-03-07T02:02:59.144802152Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\"" Mar 7 02:03:03.135990 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2522152969.mount: Deactivated successfully. 
Mar 7 02:03:05.170042 containerd[1473]: time="2026-03-07T02:03:05.168126764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:03:05.184882 containerd[1473]: time="2026-03-07T02:03:05.182030466Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=25685312" Mar 7 02:03:05.206081 containerd[1473]: time="2026-03-07T02:03:05.194062719Z" level=info msg="ImageCreate event name:\"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:03:05.213011 containerd[1473]: time="2026-03-07T02:03:05.212519056Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:03:05.220968 containerd[1473]: time="2026-03-07T02:03:05.218588388Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"25684331\" in 6.073725068s" Mar 7 02:03:05.220968 containerd[1473]: time="2026-03-07T02:03:05.218654796Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\"" Mar 7 02:03:05.225279 containerd[1473]: time="2026-03-07T02:03:05.225164323Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Mar 7 02:03:06.222973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3754709081.mount: Deactivated successfully. Mar 7 02:03:08.674012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. 
Mar 7 02:03:08.719902 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 02:03:09.446026 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:03:09.456342 (kubelet)[2043]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 02:03:09.699860 kubelet[2043]: E0307 02:03:09.698173 2043 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 02:03:09.707663 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 02:03:09.707975 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 02:03:12.048656 containerd[1473]: time="2026-03-07T02:03:12.047824719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:03:12.053839 containerd[1473]: time="2026-03-07T02:03:12.052186135Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556542" Mar 7 02:03:12.056447 containerd[1473]: time="2026-03-07T02:03:12.056365569Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:03:12.077804 containerd[1473]: time="2026-03-07T02:03:12.075270552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:03:12.085897 containerd[1473]: time="2026-03-07T02:03:12.085267704Z" level=info msg="Pulled image 
\"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 6.859729079s" Mar 7 02:03:12.085897 containerd[1473]: time="2026-03-07T02:03:12.085370131Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\"" Mar 7 02:03:12.090714 containerd[1473]: time="2026-03-07T02:03:12.086424184Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 7 02:03:12.912747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1236549247.mount: Deactivated successfully. Mar 7 02:03:12.962156 containerd[1473]: time="2026-03-07T02:03:12.956765994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:03:12.964033 containerd[1473]: time="2026-03-07T02:03:12.963499364Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Mar 7 02:03:12.971675 containerd[1473]: time="2026-03-07T02:03:12.968099456Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:03:12.984510 containerd[1473]: time="2026-03-07T02:03:12.983780980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:03:12.994762 containerd[1473]: time="2026-03-07T02:03:12.994696228Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id 
\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 907.370638ms" Mar 7 02:03:12.995126 containerd[1473]: time="2026-03-07T02:03:12.994955773Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 7 02:03:12.997156 containerd[1473]: time="2026-03-07T02:03:12.996870048Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Mar 7 02:03:16.534159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1544838482.mount: Deactivated successfully. Mar 7 02:03:20.135891 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Mar 7 02:03:20.656131 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 02:03:26.128818 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:03:26.130831 (kubelet)[2077]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 02:03:26.361342 kubelet[2077]: E0307 02:03:26.360795 2077 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 02:03:26.372088 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 02:03:26.375023 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 02:03:26.379086 systemd[1]: kubelet.service: Consumed 1.359s CPU time. 
Mar 7 02:03:31.207633 containerd[1473]: time="2026-03-07T02:03:31.206938351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:03:31.217427 containerd[1473]: time="2026-03-07T02:03:31.217074878Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23630322" Mar 7 02:03:31.220536 containerd[1473]: time="2026-03-07T02:03:31.220340934Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:03:31.233576 containerd[1473]: time="2026-03-07T02:03:31.230334354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:03:31.233576 containerd[1473]: time="2026-03-07T02:03:31.232096812Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 18.235181571s" Mar 7 02:03:31.233576 containerd[1473]: time="2026-03-07T02:03:31.232184827Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\"" Mar 7 02:03:36.435977 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Mar 7 02:03:36.457048 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 02:03:38.036885 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 7 02:03:38.038259 (kubelet)[2169]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 02:03:38.474330 kubelet[2169]: E0307 02:03:38.473087 2169 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 02:03:38.488724 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 02:03:38.489050 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 02:03:40.826625 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:03:40.897470 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 02:03:41.175473 systemd[1]: Reloading requested from client PID 2186 ('systemctl') (unit session-7.scope)... Mar 7 02:03:41.175552 systemd[1]: Reloading... Mar 7 02:03:41.761416 zram_generator::config[2228]: No configuration found. Mar 7 02:03:42.379971 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 02:03:42.694947 systemd[1]: Reloading finished in 1513 ms. Mar 7 02:03:43.195045 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 7 02:03:43.195572 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 7 02:03:43.199684 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:03:43.253724 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 02:03:44.150867 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 7 02:03:44.205783 (kubelet)[2271]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 02:03:44.477864 kubelet[2271]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 02:03:45.561644 kubelet[2271]: I0307 02:03:45.559651 2271 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 7 02:03:45.561644 kubelet[2271]: I0307 02:03:45.559882 2271 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 02:03:45.561644 kubelet[2271]: I0307 02:03:45.559907 2271 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 7 02:03:45.561644 kubelet[2271]: I0307 02:03:45.559915 2271 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 7 02:03:45.561644 kubelet[2271]: I0307 02:03:45.560386 2271 server.go:951] "Client rotation is on, will bootstrap in background" Mar 7 02:03:45.636543 kubelet[2271]: E0307 02:03:45.633961 2271 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.146:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 02:03:45.638058 kubelet[2271]: I0307 02:03:45.637124 2271 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 02:03:45.655584 kubelet[2271]: E0307 02:03:45.651614 2271 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 02:03:45.655584 kubelet[2271]: I0307 02:03:45.651695 
2271 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 7 02:03:45.836308 kubelet[2271]: I0307 02:03:45.832894 2271 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 7 02:03:45.865599 kubelet[2271]: I0307 02:03:45.863826 2271 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 02:03:45.865599 kubelet[2271]: I0307 02:03:45.863899 2271 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimit
s":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 7 02:03:45.865599 kubelet[2271]: I0307 02:03:45.864366 2271 topology_manager.go:143] "Creating topology manager with none policy" Mar 7 02:03:45.865599 kubelet[2271]: I0307 02:03:45.864380 2271 container_manager_linux.go:308] "Creating device plugin manager" Mar 7 02:03:45.866724 kubelet[2271]: I0307 02:03:45.864870 2271 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 7 02:03:45.890034 kubelet[2271]: I0307 02:03:45.888606 2271 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 7 02:03:45.916817 kubelet[2271]: I0307 02:03:45.912353 2271 kubelet.go:482] "Attempting to sync node with API server" Mar 7 02:03:45.916817 kubelet[2271]: I0307 02:03:45.912506 2271 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 02:03:45.916817 kubelet[2271]: I0307 02:03:45.912560 2271 kubelet.go:394] "Adding apiserver pod source" Mar 7 02:03:45.916817 kubelet[2271]: I0307 02:03:45.912585 2271 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 02:03:46.113442 kubelet[2271]: I0307 02:03:46.098384 2271 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 02:03:46.152157 kubelet[2271]: I0307 02:03:46.148863 2271 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 02:03:46.152157 kubelet[2271]: I0307 02:03:46.148931 2271 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 7 02:03:46.152157 kubelet[2271]: W0307 02:03:46.149119 2271 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 7 02:03:46.178451 kubelet[2271]: I0307 02:03:46.177540 2271 server.go:1257] "Started kubelet" Mar 7 02:03:46.194327 kubelet[2271]: I0307 02:03:46.187681 2271 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 7 02:03:46.194327 kubelet[2271]: I0307 02:03:46.192334 2271 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 02:03:46.195347 kubelet[2271]: I0307 02:03:46.195317 2271 server.go:317] "Adding debug handlers to kubelet server" Mar 7 02:03:46.203450 kubelet[2271]: I0307 02:03:46.198400 2271 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 7 02:03:46.226653 kubelet[2271]: I0307 02:03:46.206390 2271 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 7 02:03:46.226653 kubelet[2271]: E0307 02:03:46.207764 2271 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 02:03:46.226653 kubelet[2271]: E0307 02:03:46.208987 2271 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.146:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.146:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189a6ccbddb38675 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-07 02:03:46.177402485 +0000 UTC m=+1.942653566,LastTimestamp:2026-03-07 02:03:46.177402485 +0000 UTC m=+1.942653566,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 7 02:03:46.226653 kubelet[2271]: E0307 02:03:46.214403 2271 controller.go:201] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.146:6443: connect: connection refused" interval="200ms" Mar 7 02:03:46.226653 kubelet[2271]: I0307 02:03:46.215064 2271 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 02:03:46.226653 kubelet[2271]: I0307 02:03:46.215993 2271 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 7 02:03:46.226653 kubelet[2271]: I0307 02:03:46.217160 2271 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 02:03:46.226653 kubelet[2271]: I0307 02:03:46.217419 2271 reconciler.go:29] "Reconciler: start to sync state" Mar 7 02:03:46.226653 kubelet[2271]: I0307 02:03:46.219120 2271 factory.go:223] Registration of the systemd container factory successfully Mar 7 02:03:46.227601 kubelet[2271]: I0307 02:03:46.219356 2271 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 02:03:46.227601 kubelet[2271]: I0307 02:03:46.220415 2271 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 02:03:46.227601 kubelet[2271]: E0307 02:03:46.221935 2271 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 02:03:46.270741 kubelet[2271]: I0307 02:03:46.268079 2271 factory.go:223] Registration of the containerd container factory successfully Mar 7 02:03:46.280586 kubelet[2271]: I0307 02:03:46.278271 2271 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 7 02:03:46.320367 kubelet[2271]: E0307 02:03:46.319325 2271 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 02:03:46.772407 kubelet[2271]: E0307 02:03:46.729937 2271 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 02:03:46.790797 kubelet[2271]: E0307 02:03:46.789473 2271 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.146:6443: connect: connection refused" interval="400ms" Mar 7 02:03:46.921038 kubelet[2271]: E0307 02:03:46.920386 2271 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 02:03:47.028072 kubelet[2271]: E0307 02:03:47.026632 2271 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 02:03:47.043019 kubelet[2271]: I0307 02:03:47.042977 2271 cpu_manager.go:225] "Starting" policy="none" Mar 7 02:03:47.045532 kubelet[2271]: I0307 02:03:47.043359 2271 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 7 02:03:47.045532 kubelet[2271]: I0307 02:03:47.043448 2271 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 7 02:03:47.125700 kubelet[2271]: I0307 02:03:47.125406 2271 policy_none.go:50] "Start" Mar 7 02:03:47.125700 kubelet[2271]: I0307 02:03:47.125498 2271 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 7 02:03:47.125700 kubelet[2271]: I0307 02:03:47.125613 2271 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 7 02:03:47.133509 kubelet[2271]: E0307 02:03:47.133315 2271 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 02:03:47.147382 kubelet[2271]: 
I0307 02:03:47.142487 2271 policy_none.go:44] "Start" Mar 7 02:03:47.237645 kubelet[2271]: E0307 02:03:47.237351 2271 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 02:03:47.245463 kubelet[2271]: E0307 02:03:47.239125 2271 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.146:6443: connect: connection refused" interval="800ms" Mar 7 02:03:47.276381 kubelet[2271]: I0307 02:03:47.276086 2271 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 7 02:03:47.276381 kubelet[2271]: I0307 02:03:47.276165 2271 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 7 02:03:47.276381 kubelet[2271]: I0307 02:03:47.276321 2271 kubelet.go:2501] "Starting kubelet main sync loop" Mar 7 02:03:47.276585 kubelet[2271]: E0307 02:03:47.276420 2271 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 02:03:47.277710 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 7 02:03:47.337938 kubelet[2271]: E0307 02:03:47.337871 2271 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 02:03:47.349607 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 7 02:03:47.378344 kubelet[2271]: E0307 02:03:47.376597 2271 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 7 02:03:47.440712 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 7 02:03:47.459542 kubelet[2271]: E0307 02:03:47.458839 2271 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 02:03:47.562400 kubelet[2271]: E0307 02:03:47.560179 2271 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 02:03:47.576343 kubelet[2271]: E0307 02:03:47.575745 2271 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 02:03:47.581961 kubelet[2271]: I0307 02:03:47.577691 2271 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 7 02:03:47.581961 kubelet[2271]: I0307 02:03:47.578533 2271 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 02:03:47.584489 kubelet[2271]: E0307 02:03:47.578030 2271 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 7 02:03:47.584489 kubelet[2271]: I0307 02:03:47.583883 2271 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 7 02:03:47.596576 kubelet[2271]: E0307 02:03:47.596470 2271 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 7 02:03:47.596576 kubelet[2271]: E0307 02:03:47.596560 2271 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 7 02:03:47.708441 kubelet[2271]: I0307 02:03:47.706921 2271 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 7 02:03:47.712696 kubelet[2271]: E0307 02:03:47.709620 2271 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.146:6443/api/v1/nodes\": dial tcp 10.0.0.146:6443: connect: connection refused" node="localhost" Mar 7 02:03:47.859699 kubelet[2271]: E0307 02:03:47.838113 2271 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.146:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 02:03:47.960860 kubelet[2271]: I0307 02:03:47.957777 2271 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 7 02:03:47.960860 kubelet[2271]: E0307 02:03:47.958733 2271 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.146:6443/api/v1/nodes\": dial tcp 10.0.0.146:6443: connect: connection refused" node="localhost" Mar 7 02:03:48.049725 kubelet[2271]: E0307 02:03:48.048430 2271 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.146:6443: connect: connection refused" interval="1.6s" Mar 7 02:03:48.060064 kubelet[2271]: I0307 02:03:48.059705 2271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost" Mar 7 02:03:48.060064 kubelet[2271]: I0307 02:03:48.059758 2271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/63f1ad09446b4b4d17027aa63481ecc6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"63f1ad09446b4b4d17027aa63481ecc6\") " pod="kube-system/kube-apiserver-localhost" Mar 7 02:03:48.060064 kubelet[2271]: I0307 02:03:48.059786 2271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/63f1ad09446b4b4d17027aa63481ecc6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"63f1ad09446b4b4d17027aa63481ecc6\") " pod="kube-system/kube-apiserver-localhost" Mar 7 02:03:48.060064 kubelet[2271]: I0307 02:03:48.059843 2271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/63f1ad09446b4b4d17027aa63481ecc6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"63f1ad09446b4b4d17027aa63481ecc6\") " pod="kube-system/kube-apiserver-localhost" Mar 7 02:03:48.087330 systemd[1]: Created slice kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice - libcontainer container kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice. 
Mar 7 02:03:48.124504 kubelet[2271]: E0307 02:03:48.121563 2271 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 02:03:48.167567 kubelet[2271]: I0307 02:03:48.166061 2271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 02:03:48.167567 kubelet[2271]: I0307 02:03:48.166633 2271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 02:03:48.167567 kubelet[2271]: I0307 02:03:48.166677 2271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 02:03:48.167567 kubelet[2271]: I0307 02:03:48.166715 2271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 02:03:48.167567 kubelet[2271]: I0307 02:03:48.166832 2271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 02:03:48.176044 systemd[1]: Created slice kubepods-burstable-pod63f1ad09446b4b4d17027aa63481ecc6.slice - libcontainer container kubepods-burstable-pod63f1ad09446b4b4d17027aa63481ecc6.slice. Mar 7 02:03:48.213699 kubelet[2271]: E0307 02:03:48.213656 2271 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 02:03:48.231599 kubelet[2271]: E0307 02:03:48.223462 2271 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:48.230084 systemd[1]: Created slice kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice - libcontainer container kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice. 
Mar 7 02:03:48.231910 containerd[1473]: time="2026-03-07T02:03:48.225465360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:63f1ad09446b4b4d17027aa63481ecc6,Namespace:kube-system,Attempt:0,}" Mar 7 02:03:48.247699 kubelet[2271]: E0307 02:03:48.247599 2271 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 02:03:48.366370 kubelet[2271]: I0307 02:03:48.362179 2271 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 7 02:03:48.366370 kubelet[2271]: E0307 02:03:48.362969 2271 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.146:6443/api/v1/nodes\": dial tcp 10.0.0.146:6443: connect: connection refused" node="localhost" Mar 7 02:03:48.450422 kubelet[2271]: E0307 02:03:48.450158 2271 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:48.460609 containerd[1473]: time="2026-03-07T02:03:48.460413219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,}" Mar 7 02:03:48.568847 kubelet[2271]: E0307 02:03:48.566714 2271 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:48.569081 containerd[1473]: time="2026-03-07T02:03:48.568436012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,}" Mar 7 02:03:49.168881 kubelet[2271]: I0307 02:03:49.168407 2271 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 7 02:03:49.168881 kubelet[2271]: E0307 02:03:49.168774 2271 
kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.146:6443/api/v1/nodes\": dial tcp 10.0.0.146:6443: connect: connection refused" node="localhost" Mar 7 02:03:49.448387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4206024991.mount: Deactivated successfully. Mar 7 02:03:49.491966 containerd[1473]: time="2026-03-07T02:03:49.491812365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 02:03:49.517560 containerd[1473]: time="2026-03-07T02:03:49.516693952Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 7 02:03:49.522328 containerd[1473]: time="2026-03-07T02:03:49.522100945Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 02:03:49.533508 containerd[1473]: time="2026-03-07T02:03:49.533373996Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 02:03:49.538306 containerd[1473]: time="2026-03-07T02:03:49.536585939Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 02:03:49.557520 containerd[1473]: time="2026-03-07T02:03:49.553390636Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 02:03:49.566362 containerd[1473]: time="2026-03-07T02:03:49.566077953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 02:03:49.576616 containerd[1473]: time="2026-03-07T02:03:49.572667448Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.004119947s" Mar 7 02:03:49.581396 containerd[1473]: time="2026-03-07T02:03:49.579418676Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 02:03:49.591327 containerd[1473]: time="2026-03-07T02:03:49.589832734Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.364176826s" Mar 7 02:03:49.591327 containerd[1473]: time="2026-03-07T02:03:49.590869100Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.13029566s" Mar 7 02:03:49.655318 kubelet[2271]: E0307 02:03:49.655096 2271 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.146:6443: connect: connection refused" interval="3.2s" Mar 7 02:03:50.785600 kubelet[2271]: I0307 02:03:50.784889 2271 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 7 
02:03:50.785600 kubelet[2271]: E0307 02:03:50.785686 2271 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.146:6443/api/v1/nodes\": dial tcp 10.0.0.146:6443: connect: connection refused" node="localhost" Mar 7 02:03:51.017722 containerd[1473]: time="2026-03-07T02:03:51.017541479Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 02:03:51.023716 containerd[1473]: time="2026-03-07T02:03:51.023443812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 02:03:51.074327 containerd[1473]: time="2026-03-07T02:03:51.037397536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:03:51.074327 containerd[1473]: time="2026-03-07T02:03:51.037700626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:03:51.074656 containerd[1473]: time="2026-03-07T02:03:51.070628551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 02:03:51.074656 containerd[1473]: time="2026-03-07T02:03:51.070707048Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 02:03:51.074656 containerd[1473]: time="2026-03-07T02:03:51.070727096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:03:51.074656 containerd[1473]: time="2026-03-07T02:03:51.070846871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:03:51.092035 containerd[1473]: time="2026-03-07T02:03:51.088716033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 02:03:51.092035 containerd[1473]: time="2026-03-07T02:03:51.088799440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 02:03:51.092035 containerd[1473]: time="2026-03-07T02:03:51.088847811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:03:51.092035 containerd[1473]: time="2026-03-07T02:03:51.089003113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:03:51.301729 systemd[1]: Started cri-containerd-47c803087e06498869ff1a62ac5be9390de894745969b6f9c6b36af6cf26c26b.scope - libcontainer container 47c803087e06498869ff1a62ac5be9390de894745969b6f9c6b36af6cf26c26b. Mar 7 02:03:51.320147 systemd[1]: Started cri-containerd-a41987ab1c8c13a29a9b8ffb91a1cb5dbf3ea375a48bf9b659abc66fe8ac877c.scope - libcontainer container a41987ab1c8c13a29a9b8ffb91a1cb5dbf3ea375a48bf9b659abc66fe8ac877c. Mar 7 02:03:51.335828 systemd[1]: Started cri-containerd-5fdde58e9fd49b7ca9a12e3a5fc3ae5bdc7263446cb79247b0d9ed9b694ce195.scope - libcontainer container 5fdde58e9fd49b7ca9a12e3a5fc3ae5bdc7263446cb79247b0d9ed9b694ce195. 
Mar 7 02:03:51.591015 containerd[1473]: time="2026-03-07T02:03:51.580187929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:63f1ad09446b4b4d17027aa63481ecc6,Namespace:kube-system,Attempt:0,} returns sandbox id \"a41987ab1c8c13a29a9b8ffb91a1cb5dbf3ea375a48bf9b659abc66fe8ac877c\"" Mar 7 02:03:51.597460 kubelet[2271]: E0307 02:03:51.594631 2271 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:51.660970 containerd[1473]: time="2026-03-07T02:03:51.660803216Z" level=info msg="CreateContainer within sandbox \"a41987ab1c8c13a29a9b8ffb91a1cb5dbf3ea375a48bf9b659abc66fe8ac877c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 7 02:03:51.669302 containerd[1473]: time="2026-03-07T02:03:51.668141501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,} returns sandbox id \"47c803087e06498869ff1a62ac5be9390de894745969b6f9c6b36af6cf26c26b\"" Mar 7 02:03:51.671503 kubelet[2271]: E0307 02:03:51.671050 2271 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:51.677500 containerd[1473]: time="2026-03-07T02:03:51.677449000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,} returns sandbox id \"5fdde58e9fd49b7ca9a12e3a5fc3ae5bdc7263446cb79247b0d9ed9b694ce195\"" Mar 7 02:03:51.679423 kubelet[2271]: E0307 02:03:51.678714 2271 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:51.695610 containerd[1473]: 
time="2026-03-07T02:03:51.695461927Z" level=info msg="CreateContainer within sandbox \"47c803087e06498869ff1a62ac5be9390de894745969b6f9c6b36af6cf26c26b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 7 02:03:51.708704 containerd[1473]: time="2026-03-07T02:03:51.706634862Z" level=info msg="CreateContainer within sandbox \"5fdde58e9fd49b7ca9a12e3a5fc3ae5bdc7263446cb79247b0d9ed9b694ce195\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 7 02:03:51.720458 containerd[1473]: time="2026-03-07T02:03:51.719774489Z" level=info msg="CreateContainer within sandbox \"a41987ab1c8c13a29a9b8ffb91a1cb5dbf3ea375a48bf9b659abc66fe8ac877c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"715794ed254ce0c6a297434d1ed32d8ad90daef07ab6fdb58ba6ce9b081e89bb\"" Mar 7 02:03:51.722603 containerd[1473]: time="2026-03-07T02:03:51.722561316Z" level=info msg="StartContainer for \"715794ed254ce0c6a297434d1ed32d8ad90daef07ab6fdb58ba6ce9b081e89bb\"" Mar 7 02:03:51.787340 containerd[1473]: time="2026-03-07T02:03:51.787086094Z" level=info msg="CreateContainer within sandbox \"5fdde58e9fd49b7ca9a12e3a5fc3ae5bdc7263446cb79247b0d9ed9b694ce195\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d4e7bf674459f7e94d2ced63c270ae8bfa7f6db0b28385f6a3cd2781ccd8abc7\"" Mar 7 02:03:51.791606 containerd[1473]: time="2026-03-07T02:03:51.788338113Z" level=info msg="StartContainer for \"d4e7bf674459f7e94d2ced63c270ae8bfa7f6db0b28385f6a3cd2781ccd8abc7\"" Mar 7 02:03:51.821471 containerd[1473]: time="2026-03-07T02:03:51.821054600Z" level=info msg="CreateContainer within sandbox \"47c803087e06498869ff1a62ac5be9390de894745969b6f9c6b36af6cf26c26b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d19cb1307e5de4de967cece7b4cad305f0ff950df664c213f6369fdd5c4708b0\"" Mar 7 02:03:51.827569 containerd[1473]: time="2026-03-07T02:03:51.824982025Z" level=info msg="StartContainer for 
\"d19cb1307e5de4de967cece7b4cad305f0ff950df664c213f6369fdd5c4708b0\"" Mar 7 02:03:51.846839 systemd[1]: Started cri-containerd-715794ed254ce0c6a297434d1ed32d8ad90daef07ab6fdb58ba6ce9b081e89bb.scope - libcontainer container 715794ed254ce0c6a297434d1ed32d8ad90daef07ab6fdb58ba6ce9b081e89bb. Mar 7 02:03:51.892418 kubelet[2271]: E0307 02:03:51.887764 2271 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.146:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 02:03:51.974972 systemd[1]: Started cri-containerd-d4e7bf674459f7e94d2ced63c270ae8bfa7f6db0b28385f6a3cd2781ccd8abc7.scope - libcontainer container d4e7bf674459f7e94d2ced63c270ae8bfa7f6db0b28385f6a3cd2781ccd8abc7. Mar 7 02:03:52.019738 systemd[1]: Started cri-containerd-d19cb1307e5de4de967cece7b4cad305f0ff950df664c213f6369fdd5c4708b0.scope - libcontainer container d19cb1307e5de4de967cece7b4cad305f0ff950df664c213f6369fdd5c4708b0. 
Mar 7 02:03:52.130682 containerd[1473]: time="2026-03-07T02:03:52.130010225Z" level=info msg="StartContainer for \"715794ed254ce0c6a297434d1ed32d8ad90daef07ab6fdb58ba6ce9b081e89bb\" returns successfully" Mar 7 02:03:52.288426 containerd[1473]: time="2026-03-07T02:03:52.286021524Z" level=info msg="StartContainer for \"d19cb1307e5de4de967cece7b4cad305f0ff950df664c213f6369fdd5c4708b0\" returns successfully" Mar 7 02:03:52.288426 containerd[1473]: time="2026-03-07T02:03:52.286040403Z" level=info msg="StartContainer for \"d4e7bf674459f7e94d2ced63c270ae8bfa7f6db0b28385f6a3cd2781ccd8abc7\" returns successfully" Mar 7 02:03:52.792998 kubelet[2271]: E0307 02:03:52.788805 2271 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 02:03:52.812096 kubelet[2271]: E0307 02:03:52.812002 2271 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:52.835401 kubelet[2271]: E0307 02:03:52.822023 2271 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 02:03:52.837079 kubelet[2271]: E0307 02:03:52.837037 2271 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:52.849867 kubelet[2271]: E0307 02:03:52.849825 2271 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 02:03:52.850336 kubelet[2271]: E0307 02:03:52.850301 2271 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:54.079477 kubelet[2271]: 
E0307 02:03:54.077698 2271 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 02:03:54.079477 kubelet[2271]: E0307 02:03:54.079364 2271 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:54.222953 kubelet[2271]: I0307 02:03:54.197973 2271 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 7 02:03:54.222953 kubelet[2271]: E0307 02:03:54.214836 2271 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 02:03:54.417979 kubelet[2271]: E0307 02:03:54.390900 2271 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:54.949588 kubelet[2271]: E0307 02:03:54.947986 2271 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 02:03:54.949588 kubelet[2271]: E0307 02:03:54.948280 2271 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:03:57.604738 kubelet[2271]: E0307 02:03:57.600860 2271 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 7 02:03:58.765401 kubelet[2271]: E0307 02:03:58.764506 2271 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 02:03:58.765401 kubelet[2271]: E0307 02:03:58.764887 2271 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:04:01.309502 kubelet[2271]: E0307 02:04:01.298301 2271 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 02:04:01.309502 kubelet[2271]: E0307 02:04:01.310291 2271 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:04:02.867855 kubelet[2271]: E0307 02:04:02.860511 2271 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="6.4s" Mar 7 02:04:04.518002 kubelet[2271]: E0307 02:04:04.515311 2271 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.146:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Mar 7 02:04:05.761878 kubelet[2271]: E0307 02:04:05.759958 2271 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.146:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.189a6ccbddb38675 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-07 02:03:46.177402485 +0000 UTC m=+1.942653566,LastTimestamp:2026-03-07 02:03:46.177402485 +0000 UTC m=+1.942653566,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 7 02:04:07.604676 kubelet[2271]: E0307 02:04:07.602351 2271 
eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 7 02:04:08.785731 kubelet[2271]: E0307 02:04:08.780869 2271 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 02:04:08.785731 kubelet[2271]: E0307 02:04:08.781178 2271 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:04:10.233087 kubelet[2271]: E0307 02:04:10.232552 2271 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 7 02:04:10.843758 kubelet[2271]: I0307 02:04:10.833758 2271 apiserver.go:52] "Watching apiserver" Mar 7 02:04:10.940834 kubelet[2271]: I0307 02:04:10.940539 2271 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 7 02:04:11.072299 kubelet[2271]: I0307 02:04:11.069457 2271 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Mar 7 02:04:11.128335 kubelet[2271]: I0307 02:04:11.127345 2271 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 7 02:04:11.156812 kubelet[2271]: I0307 02:04:11.153428 2271 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 7 02:04:11.217831 kubelet[2271]: I0307 02:04:11.217078 2271 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 7 02:04:11.450833 kubelet[2271]: E0307 02:04:11.447772 2271 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:04:11.462848 kubelet[2271]: I0307 02:04:11.455347 2271 kubelet.go:3340] "Creating a mirror pod for static 
pod" pod="kube-system/kube-controller-manager-localhost" Mar 7 02:04:11.466927 kubelet[2271]: E0307 02:04:11.464927 2271 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:04:11.897032 kubelet[2271]: E0307 02:04:11.890860 2271 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:04:12.016515 kubelet[2271]: I0307 02:04:11.948807 2271 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 7 02:04:12.190150 kubelet[2271]: E0307 02:04:12.173445 2271 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:04:12.249628 kubelet[2271]: E0307 02:04:12.248990 2271 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 7 02:04:17.925104 kubelet[2271]: I0307 02:04:17.916431 2271 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=6.91641362 podStartE2EDuration="6.91641362s" podCreationTimestamp="2026-03-07 02:04:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 02:04:17.915937269 +0000 UTC m=+33.681188370" watchObservedRunningTime="2026-03-07 02:04:17.91641362 +0000 UTC m=+33.681664691" Mar 7 02:04:17.927820 kubelet[2271]: I0307 02:04:17.927548 2271 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=6.92752644 podStartE2EDuration="6.92752644s" podCreationTimestamp="2026-03-07 02:04:11 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 02:04:17.822067426 +0000 UTC m=+33.587318528" watchObservedRunningTime="2026-03-07 02:04:17.92752644 +0000 UTC m=+33.692777532" Mar 7 02:04:23.033656 systemd[1]: Reloading requested from client PID 2561 ('systemctl') (unit session-7.scope)... Mar 7 02:04:23.033681 systemd[1]: Reloading... Mar 7 02:04:23.730845 zram_generator::config[2597]: No configuration found. Mar 7 02:04:24.238872 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 02:04:24.420832 systemd[1]: Reloading finished in 1386 ms. Mar 7 02:04:24.548426 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 02:04:24.589093 systemd[1]: kubelet.service: Deactivated successfully. Mar 7 02:04:24.589850 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:04:24.592486 systemd[1]: kubelet.service: Consumed 8.322s CPU time, 131.6M memory peak, 0B memory swap peak. Mar 7 02:04:24.614003 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 02:04:25.022496 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 02:04:25.033756 (kubelet)[2644]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 02:04:25.310840 kubelet[2644]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 7 02:04:25.341393 kubelet[2644]: I0307 02:04:25.340810 2644 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 7 02:04:25.341393 kubelet[2644]: I0307 02:04:25.340883 2644 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 02:04:25.341393 kubelet[2644]: I0307 02:04:25.340911 2644 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 7 02:04:25.341393 kubelet[2644]: I0307 02:04:25.340923 2644 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 7 02:04:25.345816 kubelet[2644]: I0307 02:04:25.344506 2644 server.go:951] "Client rotation is on, will bootstrap in background" Mar 7 02:04:25.347173 kubelet[2644]: I0307 02:04:25.346505 2644 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 7 02:04:25.357405 kubelet[2644]: I0307 02:04:25.353724 2644 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 02:04:25.373743 kubelet[2644]: E0307 02:04:25.373164 2644 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 02:04:25.373743 kubelet[2644]: I0307 02:04:25.373336 2644 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 7 02:04:25.390173 kubelet[2644]: I0307 02:04:25.389754 2644 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 7 02:04:25.390593 kubelet[2644]: I0307 02:04:25.390506 2644 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 02:04:25.391368 kubelet[2644]: I0307 02:04:25.390547 2644 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 7 02:04:25.391368 kubelet[2644]: I0307 02:04:25.390797 2644 topology_manager.go:143] "Creating topology manager with none policy" Mar 7 02:04:25.391368 
kubelet[2644]: I0307 02:04:25.390812 2644 container_manager_linux.go:308] "Creating device plugin manager" Mar 7 02:04:25.391368 kubelet[2644]: I0307 02:04:25.390842 2644 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 7 02:04:25.391368 kubelet[2644]: I0307 02:04:25.391146 2644 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 7 02:04:25.391782 kubelet[2644]: I0307 02:04:25.391460 2644 kubelet.go:482] "Attempting to sync node with API server" Mar 7 02:04:25.391782 kubelet[2644]: I0307 02:04:25.391476 2644 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 02:04:25.391782 kubelet[2644]: I0307 02:04:25.391498 2644 kubelet.go:394] "Adding apiserver pod source" Mar 7 02:04:25.391782 kubelet[2644]: I0307 02:04:25.391510 2644 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 02:04:25.395331 kubelet[2644]: I0307 02:04:25.392873 2644 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 02:04:25.395331 kubelet[2644]: I0307 02:04:25.394367 2644 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 02:04:25.395331 kubelet[2644]: I0307 02:04:25.394407 2644 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 7 02:04:25.417315 kubelet[2644]: I0307 02:04:25.410936 2644 server.go:1257] "Started kubelet" Mar 7 02:04:25.417315 kubelet[2644]: I0307 02:04:25.413658 2644 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 02:04:25.428338 kubelet[2644]: I0307 02:04:25.421387 2644 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 7 02:04:25.428338 kubelet[2644]: I0307 02:04:25.424816 2644 server.go:254] "Starting 
to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 02:04:25.428338 kubelet[2644]: I0307 02:04:25.424896 2644 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 02:04:25.434280 kubelet[2644]: I0307 02:04:25.431559 2644 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 7 02:04:25.434979 kubelet[2644]: I0307 02:04:25.434895 2644 server.go:317] "Adding debug handlers to kubelet server" Mar 7 02:04:25.445299 kubelet[2644]: I0307 02:04:25.443844 2644 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 02:04:25.453553 kubelet[2644]: I0307 02:04:25.453278 2644 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 7 02:04:25.456731 kubelet[2644]: I0307 02:04:25.454858 2644 factory.go:223] Registration of the systemd container factory successfully Mar 7 02:04:25.468404 kubelet[2644]: I0307 02:04:25.468375 2644 reconciler.go:29] "Reconciler: start to sync state" Mar 7 02:04:25.471188 kubelet[2644]: I0307 02:04:25.468708 2644 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 02:04:25.471188 kubelet[2644]: I0307 02:04:25.458837 2644 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 7 02:04:25.471188 kubelet[2644]: I0307 02:04:25.471280 2644 factory.go:223] Registration of the containerd container factory successfully Mar 7 02:04:25.558664 sudo[2673]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 7 02:04:25.559869 sudo[2673]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 7 02:04:25.585110 kubelet[2644]: I0307 02:04:25.584894 2644 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Mar 7 02:04:25.592379 kubelet[2644]: I0307 02:04:25.591360 2644 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 7 02:04:25.592379 kubelet[2644]: I0307 02:04:25.591564 2644 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 7 02:04:25.592379 kubelet[2644]: I0307 02:04:25.591973 2644 kubelet.go:2501] "Starting kubelet main sync loop" Mar 7 02:04:25.592379 kubelet[2644]: E0307 02:04:25.592041 2644 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 02:04:25.651031 kubelet[2644]: I0307 02:04:25.650999 2644 cpu_manager.go:225] "Starting" policy="none" Mar 7 02:04:25.651385 kubelet[2644]: I0307 02:04:25.651364 2644 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 7 02:04:25.652437 kubelet[2644]: I0307 02:04:25.651459 2644 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 7 02:04:25.652437 kubelet[2644]: I0307 02:04:25.651616 2644 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Mar 7 02:04:25.652437 kubelet[2644]: I0307 02:04:25.651632 2644 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Mar 7 02:04:25.652437 kubelet[2644]: I0307 02:04:25.651658 2644 policy_none.go:50] "Start" Mar 7 02:04:25.652437 kubelet[2644]: I0307 02:04:25.651669 2644 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 7 02:04:25.652437 kubelet[2644]: I0307 02:04:25.651686 2644 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 7 02:04:25.652437 kubelet[2644]: I0307 02:04:25.651877 2644 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 7 02:04:25.652437 kubelet[2644]: I0307 02:04:25.651893 2644 policy_none.go:44] 
"Start" Mar 7 02:04:25.660989 kubelet[2644]: E0307 02:04:25.660957 2644 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 02:04:25.661485 kubelet[2644]: I0307 02:04:25.661468 2644 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 7 02:04:25.661623 kubelet[2644]: I0307 02:04:25.661574 2644 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 02:04:25.663108 kubelet[2644]: I0307 02:04:25.662846 2644 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 7 02:04:25.667472 kubelet[2644]: E0307 02:04:25.667444 2644 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 7 02:04:25.699377 kubelet[2644]: I0307 02:04:25.699342 2644 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 7 02:04:25.702027 kubelet[2644]: I0307 02:04:25.701766 2644 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 7 02:04:25.713108 kubelet[2644]: I0307 02:04:25.712007 2644 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 7 02:04:25.738499 kubelet[2644]: E0307 02:04:25.738451 2644 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 7 02:04:25.742579 kubelet[2644]: E0307 02:04:25.742140 2644 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 7 02:04:25.770372 kubelet[2644]: E0307 02:04:25.769172 2644 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 7 
02:04:25.834369 kubelet[2644]: I0307 02:04:25.831528 2644 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 7 02:04:25.876751 kubelet[2644]: I0307 02:04:25.872482 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/63f1ad09446b4b4d17027aa63481ecc6-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"63f1ad09446b4b4d17027aa63481ecc6\") " pod="kube-system/kube-apiserver-localhost" Mar 7 02:04:25.876751 kubelet[2644]: I0307 02:04:25.872542 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/63f1ad09446b4b4d17027aa63481ecc6-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"63f1ad09446b4b4d17027aa63481ecc6\") " pod="kube-system/kube-apiserver-localhost" Mar 7 02:04:25.876751 kubelet[2644]: I0307 02:04:25.872593 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/63f1ad09446b4b4d17027aa63481ecc6-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"63f1ad09446b4b4d17027aa63481ecc6\") " pod="kube-system/kube-apiserver-localhost" Mar 7 02:04:25.876751 kubelet[2644]: I0307 02:04:25.872625 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 02:04:25.876751 kubelet[2644]: I0307 02:04:25.872655 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod 
\"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 02:04:25.877005 kubelet[2644]: I0307 02:04:25.872725 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost" Mar 7 02:04:25.877005 kubelet[2644]: I0307 02:04:25.872747 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 02:04:25.877005 kubelet[2644]: I0307 02:04:25.872769 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 02:04:25.877005 kubelet[2644]: I0307 02:04:25.872793 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 02:04:25.924547 kubelet[2644]: I0307 02:04:25.924482 2644 kubelet_node_status.go:123] "Node was previously registered" node="localhost" Mar 7 02:04:25.924887 kubelet[2644]: I0307 02:04:25.924862 2644 kubelet_node_status.go:77] "Successfully 
registered node" node="localhost" Mar 7 02:04:26.041898 kubelet[2644]: E0307 02:04:26.040615 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:04:26.043758 kubelet[2644]: E0307 02:04:26.043601 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:04:26.071953 kubelet[2644]: E0307 02:04:26.071864 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:04:26.398146 kubelet[2644]: I0307 02:04:26.396544 2644 apiserver.go:52] "Watching apiserver" Mar 7 02:04:26.576934 kubelet[2644]: I0307 02:04:26.574414 2644 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 7 02:04:26.864497 kubelet[2644]: E0307 02:04:26.862793 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:04:26.889421 kubelet[2644]: E0307 02:04:26.886519 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:04:26.897502 kubelet[2644]: E0307 02:04:26.896819 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:04:27.068394 kubelet[2644]: I0307 02:04:27.068349 2644 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 7 02:04:27.069553 containerd[1473]: time="2026-03-07T02:04:27.069175198Z" level=info msg="No cni config template is 
specified, wait for other system components to drop the config." Mar 7 02:04:27.080497 kubelet[2644]: I0307 02:04:27.076884 2644 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 7 02:04:27.947533 kubelet[2644]: E0307 02:04:27.945584 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:04:28.190532 kubelet[2644]: E0307 02:04:28.178052 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:04:30.211142 kubelet[2644]: E0307 02:04:30.210698 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:04:30.763935 kubelet[2644]: I0307 02:04:30.756649 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b5e7dd19-2d52-45bc-bae9-c3707f9fa543-kube-proxy\") pod \"kube-proxy-gtjs4\" (UID: \"b5e7dd19-2d52-45bc-bae9-c3707f9fa543\") " pod="kube-system/kube-proxy-gtjs4" Mar 7 02:04:30.763935 kubelet[2644]: I0307 02:04:30.756740 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5e7dd19-2d52-45bc-bae9-c3707f9fa543-xtables-lock\") pod \"kube-proxy-gtjs4\" (UID: \"b5e7dd19-2d52-45bc-bae9-c3707f9fa543\") " pod="kube-system/kube-proxy-gtjs4" Mar 7 02:04:30.763935 kubelet[2644]: I0307 02:04:30.756981 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rk9v\" (UniqueName: \"kubernetes.io/projected/b5e7dd19-2d52-45bc-bae9-c3707f9fa543-kube-api-access-2rk9v\") pod \"kube-proxy-gtjs4\" (UID: 
\"b5e7dd19-2d52-45bc-bae9-c3707f9fa543\") " pod="kube-system/kube-proxy-gtjs4" Mar 7 02:04:30.763935 kubelet[2644]: I0307 02:04:30.757016 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5e7dd19-2d52-45bc-bae9-c3707f9fa543-lib-modules\") pod \"kube-proxy-gtjs4\" (UID: \"b5e7dd19-2d52-45bc-bae9-c3707f9fa543\") " pod="kube-system/kube-proxy-gtjs4" Mar 7 02:04:31.246172 systemd[1]: Created slice kubepods-besteffort-podb5e7dd19_2d52_45bc_bae9_c3707f9fa543.slice - libcontainer container kubepods-besteffort-podb5e7dd19_2d52_45bc_bae9_c3707f9fa543.slice. Mar 7 02:04:32.315356 kubelet[2644]: E0307 02:04:32.288608 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:04:32.377959 containerd[1473]: time="2026-03-07T02:04:32.289973794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gtjs4,Uid:b5e7dd19-2d52-45bc-bae9-c3707f9fa543,Namespace:kube-system,Attempt:0,}" Mar 7 02:04:33.771777 containerd[1473]: time="2026-03-07T02:04:33.771380166Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 02:04:33.780586 containerd[1473]: time="2026-03-07T02:04:33.779862124Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 02:04:33.780586 containerd[1473]: time="2026-03-07T02:04:33.779902610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:04:33.780586 containerd[1473]: time="2026-03-07T02:04:33.780408689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:04:33.975642 systemd[1]: Started cri-containerd-fc97e0934aedfbebaca264abf8be492244f11f6db244ed6ecc31bbd24e5022c1.scope - libcontainer container fc97e0934aedfbebaca264abf8be492244f11f6db244ed6ecc31bbd24e5022c1. Mar 7 02:04:34.058675 sudo[2673]: pam_unix(sudo:session): session closed for user root Mar 7 02:04:34.085878 containerd[1473]: time="2026-03-07T02:04:34.085121789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gtjs4,Uid:b5e7dd19-2d52-45bc-bae9-c3707f9fa543,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc97e0934aedfbebaca264abf8be492244f11f6db244ed6ecc31bbd24e5022c1\"" Mar 7 02:04:34.094362 kubelet[2644]: E0307 02:04:34.092500 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:04:34.124683 containerd[1473]: time="2026-03-07T02:04:34.122409129Z" level=info msg="CreateContainer within sandbox \"fc97e0934aedfbebaca264abf8be492244f11f6db244ed6ecc31bbd24e5022c1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 7 02:04:34.258898 containerd[1473]: time="2026-03-07T02:04:34.258729776Z" level=info msg="CreateContainer within sandbox \"fc97e0934aedfbebaca264abf8be492244f11f6db244ed6ecc31bbd24e5022c1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6b9211e002f757cb18f0133ee32d47d772647bb7fe5a96bd2e1ce1b02721e268\"" Mar 7 02:04:34.262354 containerd[1473]: time="2026-03-07T02:04:34.262038164Z" level=info msg="StartContainer for \"6b9211e002f757cb18f0133ee32d47d772647bb7fe5a96bd2e1ce1b02721e268\"" Mar 7 02:04:34.375804 systemd[1]: Started cri-containerd-6b9211e002f757cb18f0133ee32d47d772647bb7fe5a96bd2e1ce1b02721e268.scope - libcontainer container 6b9211e002f757cb18f0133ee32d47d772647bb7fe5a96bd2e1ce1b02721e268. 
Mar 7 02:04:34.532999 containerd[1473]: time="2026-03-07T02:04:34.530279272Z" level=info msg="StartContainer for \"6b9211e002f757cb18f0133ee32d47d772647bb7fe5a96bd2e1ce1b02721e268\" returns successfully" Mar 7 02:04:35.461967 kubelet[2644]: E0307 02:04:35.460946 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:04:35.592553 kubelet[2644]: I0307 02:04:35.590299 2644 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-gtjs4" podStartSLOduration=8.590141541 podStartE2EDuration="8.590141541s" podCreationTimestamp="2026-03-07 02:04:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 02:04:35.583778336 +0000 UTC m=+10.520868457" watchObservedRunningTime="2026-03-07 02:04:35.590141541 +0000 UTC m=+10.527231661" Mar 7 02:04:36.466038 kubelet[2644]: E0307 02:04:36.465541 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:04:38.098317 kubelet[2644]: I0307 02:04:38.095724 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-host-proc-sys-kernel\") pod \"cilium-g72ws\" (UID: \"bc4d3e66-626e-445c-8828-cb0a16044b6f\") " pod="kube-system/cilium-g72ws" Mar 7 02:04:38.098317 kubelet[2644]: I0307 02:04:38.095800 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-bpf-maps\") pod \"cilium-g72ws\" (UID: \"bc4d3e66-626e-445c-8828-cb0a16044b6f\") " pod="kube-system/cilium-g72ws" Mar 7 02:04:38.098317 
kubelet[2644]: I0307 02:04:38.095831 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-etc-cni-netd\") pod \"cilium-g72ws\" (UID: \"bc4d3e66-626e-445c-8828-cb0a16044b6f\") " pod="kube-system/cilium-g72ws" Mar 7 02:04:38.098317 kubelet[2644]: I0307 02:04:38.095857 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bc4d3e66-626e-445c-8828-cb0a16044b6f-clustermesh-secrets\") pod \"cilium-g72ws\" (UID: \"bc4d3e66-626e-445c-8828-cb0a16044b6f\") " pod="kube-system/cilium-g72ws" Mar 7 02:04:38.098317 kubelet[2644]: I0307 02:04:38.095884 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bc4d3e66-626e-445c-8828-cb0a16044b6f-hubble-tls\") pod \"cilium-g72ws\" (UID: \"bc4d3e66-626e-445c-8828-cb0a16044b6f\") " pod="kube-system/cilium-g72ws" Mar 7 02:04:38.110110 kubelet[2644]: I0307 02:04:38.095910 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcdx2\" (UniqueName: \"kubernetes.io/projected/80da2ab2-00ba-4c2c-9275-84bf86c3ce95-kube-api-access-pcdx2\") pod \"cilium-operator-78cf5644cb-q9jsx\" (UID: \"80da2ab2-00ba-4c2c-9275-84bf86c3ce95\") " pod="kube-system/cilium-operator-78cf5644cb-q9jsx" Mar 7 02:04:38.110110 kubelet[2644]: I0307 02:04:38.095930 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-cilium-cgroup\") pod \"cilium-g72ws\" (UID: \"bc4d3e66-626e-445c-8828-cb0a16044b6f\") " pod="kube-system/cilium-g72ws" Mar 7 02:04:38.110110 kubelet[2644]: I0307 02:04:38.095989 2644 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-cni-path\") pod \"cilium-g72ws\" (UID: \"bc4d3e66-626e-445c-8828-cb0a16044b6f\") " pod="kube-system/cilium-g72ws" Mar 7 02:04:38.110110 kubelet[2644]: I0307 02:04:38.096021 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-cilium-run\") pod \"cilium-g72ws\" (UID: \"bc4d3e66-626e-445c-8828-cb0a16044b6f\") " pod="kube-system/cilium-g72ws" Mar 7 02:04:38.110110 kubelet[2644]: I0307 02:04:38.096041 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-lib-modules\") pod \"cilium-g72ws\" (UID: \"bc4d3e66-626e-445c-8828-cb0a16044b6f\") " pod="kube-system/cilium-g72ws" Mar 7 02:04:38.110365 kubelet[2644]: I0307 02:04:38.096065 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bc4d3e66-626e-445c-8828-cb0a16044b6f-cilium-config-path\") pod \"cilium-g72ws\" (UID: \"bc4d3e66-626e-445c-8828-cb0a16044b6f\") " pod="kube-system/cilium-g72ws" Mar 7 02:04:38.110365 kubelet[2644]: I0307 02:04:38.096093 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpf4s\" (UniqueName: \"kubernetes.io/projected/bc4d3e66-626e-445c-8828-cb0a16044b6f-kube-api-access-tpf4s\") pod \"cilium-g72ws\" (UID: \"bc4d3e66-626e-445c-8828-cb0a16044b6f\") " pod="kube-system/cilium-g72ws" Mar 7 02:04:38.110365 kubelet[2644]: I0307 02:04:38.096117 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-xtables-lock\") pod \"cilium-g72ws\" (UID: \"bc4d3e66-626e-445c-8828-cb0a16044b6f\") " pod="kube-system/cilium-g72ws" Mar 7 02:04:38.110365 kubelet[2644]: I0307 02:04:38.096142 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-hostproc\") pod \"cilium-g72ws\" (UID: \"bc4d3e66-626e-445c-8828-cb0a16044b6f\") " pod="kube-system/cilium-g72ws" Mar 7 02:04:38.110365 kubelet[2644]: I0307 02:04:38.096161 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-host-proc-sys-net\") pod \"cilium-g72ws\" (UID: \"bc4d3e66-626e-445c-8828-cb0a16044b6f\") " pod="kube-system/cilium-g72ws" Mar 7 02:04:38.112377 kubelet[2644]: I0307 02:04:38.096180 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/80da2ab2-00ba-4c2c-9275-84bf86c3ce95-cilium-config-path\") pod \"cilium-operator-78cf5644cb-q9jsx\" (UID: \"80da2ab2-00ba-4c2c-9275-84bf86c3ce95\") " pod="kube-system/cilium-operator-78cf5644cb-q9jsx" Mar 7 02:04:38.292635 systemd[1]: Created slice kubepods-besteffort-pod80da2ab2_00ba_4c2c_9275_84bf86c3ce95.slice - libcontainer container kubepods-besteffort-pod80da2ab2_00ba_4c2c_9275_84bf86c3ce95.slice. 
Mar 7 02:04:38.342848 kubelet[2644]: E0307 02:04:38.340707 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:04:38.386101 containerd[1473]: time="2026-03-07T02:04:38.343845647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-q9jsx,Uid:80da2ab2-00ba-4c2c-9275-84bf86c3ce95,Namespace:kube-system,Attempt:0,}" Mar 7 02:04:38.381872 systemd[1]: Created slice kubepods-burstable-podbc4d3e66_626e_445c_8828_cb0a16044b6f.slice - libcontainer container kubepods-burstable-podbc4d3e66_626e_445c_8828_cb0a16044b6f.slice. Mar 7 02:04:38.569059 kubelet[2644]: E0307 02:04:38.565697 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:04:38.579323 containerd[1473]: time="2026-03-07T02:04:38.573151088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g72ws,Uid:bc4d3e66-626e-445c-8828-cb0a16044b6f,Namespace:kube-system,Attempt:0,}" Mar 7 02:04:39.264823 containerd[1473]: time="2026-03-07T02:04:39.262835828Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 02:04:39.264823 containerd[1473]: time="2026-03-07T02:04:39.262938193Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 02:04:39.264823 containerd[1473]: time="2026-03-07T02:04:39.262961326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:04:39.264823 containerd[1473]: time="2026-03-07T02:04:39.263101102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:04:39.279913 containerd[1473]: time="2026-03-07T02:04:39.278408104Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 02:04:39.293938 containerd[1473]: time="2026-03-07T02:04:39.279049180Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 02:04:39.293938 containerd[1473]: time="2026-03-07T02:04:39.285698722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:04:39.293938 containerd[1473]: time="2026-03-07T02:04:39.289796191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:04:39.893984 systemd[1]: Started cri-containerd-1d83ef91e2542ee19ae2639079b1db2f8c43f5a8d812f7432538b9ebe8b0d685.scope - libcontainer container 1d83ef91e2542ee19ae2639079b1db2f8c43f5a8d812f7432538b9ebe8b0d685. Mar 7 02:04:39.905875 systemd[1]: Started cri-containerd-e5289e09a66d1e51467b125f2f86fe7fcc09e7ffc3705dd25e0b4a141768f460.scope - libcontainer container e5289e09a66d1e51467b125f2f86fe7fcc09e7ffc3705dd25e0b4a141768f460. 
Mar 7 02:04:40.136020 containerd[1473]: time="2026-03-07T02:04:40.135851580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g72ws,Uid:bc4d3e66-626e-445c-8828-cb0a16044b6f,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d83ef91e2542ee19ae2639079b1db2f8c43f5a8d812f7432538b9ebe8b0d685\"" Mar 7 02:04:40.140880 kubelet[2644]: E0307 02:04:40.140561 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:04:40.178612 containerd[1473]: time="2026-03-07T02:04:40.152432003Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 7 02:04:40.375895 containerd[1473]: time="2026-03-07T02:04:40.375665875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-q9jsx,Uid:80da2ab2-00ba-4c2c-9275-84bf86c3ce95,Namespace:kube-system,Attempt:0,} returns sandbox id \"e5289e09a66d1e51467b125f2f86fe7fcc09e7ffc3705dd25e0b4a141768f460\"" Mar 7 02:04:40.381009 kubelet[2644]: E0307 02:04:40.377955 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:05:06.758789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1348025335.mount: Deactivated successfully. 
Mar 7 02:05:24.110116 containerd[1473]: time="2026-03-07T02:05:24.109128974Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:05:24.115828 containerd[1473]: time="2026-03-07T02:05:24.115677200Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 7 02:05:24.117848 containerd[1473]: time="2026-03-07T02:05:24.117758968Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:05:24.127323 containerd[1473]: time="2026-03-07T02:05:24.121528628Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 43.968861889s" Mar 7 02:05:24.127323 containerd[1473]: time="2026-03-07T02:05:24.121768295Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 7 02:05:24.127323 containerd[1473]: time="2026-03-07T02:05:24.126904203Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 7 02:05:24.165012 containerd[1473]: time="2026-03-07T02:05:24.164769306Z" level=info msg="CreateContainer within sandbox \"1d83ef91e2542ee19ae2639079b1db2f8c43f5a8d812f7432538b9ebe8b0d685\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 7 02:05:24.205616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3457175073.mount: Deactivated successfully. Mar 7 02:05:24.214675 containerd[1473]: time="2026-03-07T02:05:24.214578430Z" level=info msg="CreateContainer within sandbox \"1d83ef91e2542ee19ae2639079b1db2f8c43f5a8d812f7432538b9ebe8b0d685\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2f3fd2d560b5ff1835b10c74eb76dcd4411c964836932fa3e9a5ed87a9294325\"" Mar 7 02:05:24.216306 containerd[1473]: time="2026-03-07T02:05:24.215910850Z" level=info msg="StartContainer for \"2f3fd2d560b5ff1835b10c74eb76dcd4411c964836932fa3e9a5ed87a9294325\"" Mar 7 02:05:24.294920 systemd[1]: Started cri-containerd-2f3fd2d560b5ff1835b10c74eb76dcd4411c964836932fa3e9a5ed87a9294325.scope - libcontainer container 2f3fd2d560b5ff1835b10c74eb76dcd4411c964836932fa3e9a5ed87a9294325. Mar 7 02:05:24.359401 containerd[1473]: time="2026-03-07T02:05:24.358527785Z" level=info msg="StartContainer for \"2f3fd2d560b5ff1835b10c74eb76dcd4411c964836932fa3e9a5ed87a9294325\" returns successfully" Mar 7 02:05:24.405006 systemd[1]: cri-containerd-2f3fd2d560b5ff1835b10c74eb76dcd4411c964836932fa3e9a5ed87a9294325.scope: Deactivated successfully. 
Mar 7 02:05:24.787879 kubelet[2644]: E0307 02:05:24.787831 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:05:24.946406 containerd[1473]: time="2026-03-07T02:05:24.944464921Z" level=info msg="shim disconnected" id=2f3fd2d560b5ff1835b10c74eb76dcd4411c964836932fa3e9a5ed87a9294325 namespace=k8s.io Mar 7 02:05:24.946406 containerd[1473]: time="2026-03-07T02:05:24.945949510Z" level=warning msg="cleaning up after shim disconnected" id=2f3fd2d560b5ff1835b10c74eb76dcd4411c964836932fa3e9a5ed87a9294325 namespace=k8s.io Mar 7 02:05:24.946406 containerd[1473]: time="2026-03-07T02:05:24.945971069Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 02:05:25.228534 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f3fd2d560b5ff1835b10c74eb76dcd4411c964836932fa3e9a5ed87a9294325-rootfs.mount: Deactivated successfully. Mar 7 02:05:25.559027 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3968138985.mount: Deactivated successfully. 
Mar 7 02:05:25.794577 kubelet[2644]: E0307 02:05:25.794530 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:05:25.817548 containerd[1473]: time="2026-03-07T02:05:25.816667981Z" level=info msg="CreateContainer within sandbox \"1d83ef91e2542ee19ae2639079b1db2f8c43f5a8d812f7432538b9ebe8b0d685\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 7 02:05:25.922688 containerd[1473]: time="2026-03-07T02:05:25.922583452Z" level=info msg="CreateContainer within sandbox \"1d83ef91e2542ee19ae2639079b1db2f8c43f5a8d812f7432538b9ebe8b0d685\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"595db9abcfe5a48aaf5e998207c5d3f840fc6a543204d10d59534ad168ad5fdf\"" Mar 7 02:05:25.923715 containerd[1473]: time="2026-03-07T02:05:25.923617499Z" level=info msg="StartContainer for \"595db9abcfe5a48aaf5e998207c5d3f840fc6a543204d10d59534ad168ad5fdf\"" Mar 7 02:05:26.021409 systemd[1]: Started cri-containerd-595db9abcfe5a48aaf5e998207c5d3f840fc6a543204d10d59534ad168ad5fdf.scope - libcontainer container 595db9abcfe5a48aaf5e998207c5d3f840fc6a543204d10d59534ad168ad5fdf. Mar 7 02:05:26.193185 containerd[1473]: time="2026-03-07T02:05:26.192181318Z" level=info msg="StartContainer for \"595db9abcfe5a48aaf5e998207c5d3f840fc6a543204d10d59534ad168ad5fdf\" returns successfully" Mar 7 02:05:26.244772 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 7 02:05:26.245313 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 7 02:05:26.245444 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 7 02:05:26.257992 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 7 02:05:26.260959 systemd[1]: cri-containerd-595db9abcfe5a48aaf5e998207c5d3f840fc6a543204d10d59534ad168ad5fdf.scope: Deactivated successfully. 
Mar 7 02:05:26.324660 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-595db9abcfe5a48aaf5e998207c5d3f840fc6a543204d10d59534ad168ad5fdf-rootfs.mount: Deactivated successfully. Mar 7 02:05:26.361695 containerd[1473]: time="2026-03-07T02:05:26.361497536Z" level=info msg="shim disconnected" id=595db9abcfe5a48aaf5e998207c5d3f840fc6a543204d10d59534ad168ad5fdf namespace=k8s.io Mar 7 02:05:26.361695 containerd[1473]: time="2026-03-07T02:05:26.361565801Z" level=warning msg="cleaning up after shim disconnected" id=595db9abcfe5a48aaf5e998207c5d3f840fc6a543204d10d59534ad168ad5fdf namespace=k8s.io Mar 7 02:05:26.361695 containerd[1473]: time="2026-03-07T02:05:26.361577862Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 02:05:26.384937 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 7 02:05:26.803499 kubelet[2644]: E0307 02:05:26.803301 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:05:26.811736 containerd[1473]: time="2026-03-07T02:05:26.811382393Z" level=info msg="CreateContainer within sandbox \"1d83ef91e2542ee19ae2639079b1db2f8c43f5a8d812f7432538b9ebe8b0d685\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 7 02:05:26.872255 containerd[1473]: time="2026-03-07T02:05:26.871739773Z" level=info msg="CreateContainer within sandbox \"1d83ef91e2542ee19ae2639079b1db2f8c43f5a8d812f7432538b9ebe8b0d685\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c15c2526c987910e586af507e265d39c3477628d175025c51dc431ad5dacaf06\"" Mar 7 02:05:26.873046 containerd[1473]: time="2026-03-07T02:05:26.872752974Z" level=info msg="StartContainer for \"c15c2526c987910e586af507e265d39c3477628d175025c51dc431ad5dacaf06\"" Mar 7 02:05:26.946918 systemd[1]: Started cri-containerd-c15c2526c987910e586af507e265d39c3477628d175025c51dc431ad5dacaf06.scope - libcontainer 
container c15c2526c987910e586af507e265d39c3477628d175025c51dc431ad5dacaf06. Mar 7 02:05:27.031297 containerd[1473]: time="2026-03-07T02:05:27.029609579Z" level=info msg="StartContainer for \"c15c2526c987910e586af507e265d39c3477628d175025c51dc431ad5dacaf06\" returns successfully" Mar 7 02:05:27.039607 systemd[1]: cri-containerd-c15c2526c987910e586af507e265d39c3477628d175025c51dc431ad5dacaf06.scope: Deactivated successfully. Mar 7 02:05:27.206545 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c15c2526c987910e586af507e265d39c3477628d175025c51dc431ad5dacaf06-rootfs.mount: Deactivated successfully. Mar 7 02:05:27.221580 containerd[1473]: time="2026-03-07T02:05:27.219734366Z" level=info msg="shim disconnected" id=c15c2526c987910e586af507e265d39c3477628d175025c51dc431ad5dacaf06 namespace=k8s.io Mar 7 02:05:27.221580 containerd[1473]: time="2026-03-07T02:05:27.219803913Z" level=warning msg="cleaning up after shim disconnected" id=c15c2526c987910e586af507e265d39c3477628d175025c51dc431ad5dacaf06 namespace=k8s.io Mar 7 02:05:27.221580 containerd[1473]: time="2026-03-07T02:05:27.219818360Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 02:05:27.297571 containerd[1473]: time="2026-03-07T02:05:27.297173027Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:05:27.300443 containerd[1473]: time="2026-03-07T02:05:27.300171367Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 7 02:05:27.306154 containerd[1473]: time="2026-03-07T02:05:27.305837559Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 02:05:27.308713 
containerd[1473]: time="2026-03-07T02:05:27.308558193Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.181610911s" Mar 7 02:05:27.308713 containerd[1473]: time="2026-03-07T02:05:27.308644922Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 7 02:05:27.322007 containerd[1473]: time="2026-03-07T02:05:27.321884846Z" level=info msg="CreateContainer within sandbox \"e5289e09a66d1e51467b125f2f86fe7fcc09e7ffc3705dd25e0b4a141768f460\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 7 02:05:27.373946 containerd[1473]: time="2026-03-07T02:05:27.373720433Z" level=info msg="CreateContainer within sandbox \"e5289e09a66d1e51467b125f2f86fe7fcc09e7ffc3705dd25e0b4a141768f460\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"cfbb0adeb5a94be565cd412262dd0fa66dde372053e5797aeb46671aadf8c11d\"" Mar 7 02:05:27.376120 containerd[1473]: time="2026-03-07T02:05:27.375668726Z" level=info msg="StartContainer for \"cfbb0adeb5a94be565cd412262dd0fa66dde372053e5797aeb46671aadf8c11d\"" Mar 7 02:05:27.479678 systemd[1]: Started cri-containerd-cfbb0adeb5a94be565cd412262dd0fa66dde372053e5797aeb46671aadf8c11d.scope - libcontainer container cfbb0adeb5a94be565cd412262dd0fa66dde372053e5797aeb46671aadf8c11d. 
Mar 7 02:05:27.559741 containerd[1473]: time="2026-03-07T02:05:27.559499715Z" level=info msg="StartContainer for \"cfbb0adeb5a94be565cd412262dd0fa66dde372053e5797aeb46671aadf8c11d\" returns successfully" Mar 7 02:05:27.829848 kubelet[2644]: E0307 02:05:27.829648 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:05:27.846557 kubelet[2644]: E0307 02:05:27.846396 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:05:27.847971 containerd[1473]: time="2026-03-07T02:05:27.847743511Z" level=info msg="CreateContainer within sandbox \"1d83ef91e2542ee19ae2639079b1db2f8c43f5a8d812f7432538b9ebe8b0d685\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 7 02:05:27.929149 containerd[1473]: time="2026-03-07T02:05:27.928908930Z" level=info msg="CreateContainer within sandbox \"1d83ef91e2542ee19ae2639079b1db2f8c43f5a8d812f7432538b9ebe8b0d685\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f4cd76a7b354dd1a4330a98cd7f06f92bb672d063db12d4c596c7b73b9de934e\"" Mar 7 02:05:27.934691 containerd[1473]: time="2026-03-07T02:05:27.934562694Z" level=info msg="StartContainer for \"f4cd76a7b354dd1a4330a98cd7f06f92bb672d063db12d4c596c7b73b9de934e\"" Mar 7 02:05:27.937781 kubelet[2644]: I0307 02:05:27.937622 2644 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-operator-78cf5644cb-q9jsx" podStartSLOduration=4.008886025 podStartE2EDuration="50.937605184s" podCreationTimestamp="2026-03-07 02:04:37 +0000 UTC" firstStartedPulling="2026-03-07 02:04:40.382144524 +0000 UTC m=+15.319234615" lastFinishedPulling="2026-03-07 02:05:27.310863674 +0000 UTC m=+62.247953774" observedRunningTime="2026-03-07 02:05:27.934300202 +0000 UTC 
m=+62.871390322" watchObservedRunningTime="2026-03-07 02:05:27.937605184 +0000 UTC m=+62.874695294" Mar 7 02:05:28.028578 systemd[1]: Started cri-containerd-f4cd76a7b354dd1a4330a98cd7f06f92bb672d063db12d4c596c7b73b9de934e.scope - libcontainer container f4cd76a7b354dd1a4330a98cd7f06f92bb672d063db12d4c596c7b73b9de934e. Mar 7 02:05:28.121782 systemd[1]: cri-containerd-f4cd76a7b354dd1a4330a98cd7f06f92bb672d063db12d4c596c7b73b9de934e.scope: Deactivated successfully. Mar 7 02:05:28.133130 containerd[1473]: time="2026-03-07T02:05:28.132170977Z" level=info msg="StartContainer for \"f4cd76a7b354dd1a4330a98cd7f06f92bb672d063db12d4c596c7b73b9de934e\" returns successfully" Mar 7 02:05:28.188139 containerd[1473]: time="2026-03-07T02:05:28.187716032Z" level=info msg="shim disconnected" id=f4cd76a7b354dd1a4330a98cd7f06f92bb672d063db12d4c596c7b73b9de934e namespace=k8s.io Mar 7 02:05:28.188139 containerd[1473]: time="2026-03-07T02:05:28.187806888Z" level=warning msg="cleaning up after shim disconnected" id=f4cd76a7b354dd1a4330a98cd7f06f92bb672d063db12d4c596c7b73b9de934e namespace=k8s.io Mar 7 02:05:28.188139 containerd[1473]: time="2026-03-07T02:05:28.187820723Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 02:05:28.858310 kubelet[2644]: E0307 02:05:28.856635 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:05:28.858310 kubelet[2644]: E0307 02:05:28.856741 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:05:28.888477 containerd[1473]: time="2026-03-07T02:05:28.887874014Z" level=info msg="CreateContainer within sandbox \"1d83ef91e2542ee19ae2639079b1db2f8c43f5a8d812f7432538b9ebe8b0d685\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 7 02:05:28.939536 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2166419710.mount: Deactivated successfully. Mar 7 02:05:28.957041 containerd[1473]: time="2026-03-07T02:05:28.956783345Z" level=info msg="CreateContainer within sandbox \"1d83ef91e2542ee19ae2639079b1db2f8c43f5a8d812f7432538b9ebe8b0d685\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f746e26bc1912a41ca7d6028a69ba1fe4f6faffaa523360dbffdedae17f48410\"" Mar 7 02:05:28.960494 containerd[1473]: time="2026-03-07T02:05:28.959156109Z" level=info msg="StartContainer for \"f746e26bc1912a41ca7d6028a69ba1fe4f6faffaa523360dbffdedae17f48410\"" Mar 7 02:05:29.097675 systemd[1]: Started cri-containerd-f746e26bc1912a41ca7d6028a69ba1fe4f6faffaa523360dbffdedae17f48410.scope - libcontainer container f746e26bc1912a41ca7d6028a69ba1fe4f6faffaa523360dbffdedae17f48410. Mar 7 02:05:29.256415 containerd[1473]: time="2026-03-07T02:05:29.255872401Z" level=info msg="StartContainer for \"f746e26bc1912a41ca7d6028a69ba1fe4f6faffaa523360dbffdedae17f48410\" returns successfully" Mar 7 02:05:29.737078 kubelet[2644]: I0307 02:05:29.736547 2644 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Mar 7 02:05:29.911507 kubelet[2644]: E0307 02:05:29.911469 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:05:29.946545 systemd[1]: Created slice kubepods-burstable-podc1482172_c714_4892_94df_10b7c529b71e.slice - libcontainer container kubepods-burstable-podc1482172_c714_4892_94df_10b7c529b71e.slice. Mar 7 02:05:30.006305 systemd[1]: Created slice kubepods-burstable-podc31963a9_b9cf_4c77_9295_6a6309f71424.slice - libcontainer container kubepods-burstable-podc31963a9_b9cf_4c77_9295_6a6309f71424.slice. 
Mar 7 02:05:30.030510 kubelet[2644]: I0307 02:05:30.028694 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c1482172-c714-4892-94df-10b7c529b71e-config-volume\") pod \"coredns-7d764666f9-fmcm8\" (UID: \"c1482172-c714-4892-94df-10b7c529b71e\") " pod="kube-system/coredns-7d764666f9-fmcm8" Mar 7 02:05:30.030510 kubelet[2644]: I0307 02:05:30.028757 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c31963a9-b9cf-4c77-9295-6a6309f71424-config-volume\") pod \"coredns-7d764666f9-25sd6\" (UID: \"c31963a9-b9cf-4c77-9295-6a6309f71424\") " pod="kube-system/coredns-7d764666f9-25sd6" Mar 7 02:05:30.030510 kubelet[2644]: I0307 02:05:30.028798 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtfdt\" (UniqueName: \"kubernetes.io/projected/c1482172-c714-4892-94df-10b7c529b71e-kube-api-access-jtfdt\") pod \"coredns-7d764666f9-fmcm8\" (UID: \"c1482172-c714-4892-94df-10b7c529b71e\") " pod="kube-system/coredns-7d764666f9-fmcm8" Mar 7 02:05:30.030510 kubelet[2644]: I0307 02:05:30.028833 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8z6n\" (UniqueName: \"kubernetes.io/projected/c31963a9-b9cf-4c77-9295-6a6309f71424-kube-api-access-m8z6n\") pod \"coredns-7d764666f9-25sd6\" (UID: \"c31963a9-b9cf-4c77-9295-6a6309f71424\") " pod="kube-system/coredns-7d764666f9-25sd6" Mar 7 02:05:30.081793 kubelet[2644]: I0307 02:05:30.077704 2644 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-g72ws" podStartSLOduration=4.36248441 podStartE2EDuration="53.077683398s" podCreationTimestamp="2026-03-07 02:04:37 +0000 UTC" firstStartedPulling="2026-03-07 02:04:40.145107662 +0000 UTC m=+15.082197752" 
lastFinishedPulling="2026-03-07 02:05:28.86030665 +0000 UTC m=+63.797396740" observedRunningTime="2026-03-07 02:05:30.066319008 +0000 UTC m=+65.003409108" watchObservedRunningTime="2026-03-07 02:05:30.077683398 +0000 UTC m=+65.014773488" Mar 7 02:05:30.323424 kubelet[2644]: E0307 02:05:30.314434 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:05:30.352637 kubelet[2644]: E0307 02:05:30.352150 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:05:30.407628 containerd[1473]: time="2026-03-07T02:05:30.407346485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-25sd6,Uid:c31963a9-b9cf-4c77-9295-6a6309f71424,Namespace:kube-system,Attempt:0,}" Mar 7 02:05:30.434652 containerd[1473]: time="2026-03-07T02:05:30.430672683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-fmcm8,Uid:c1482172-c714-4892-94df-10b7c529b71e,Namespace:kube-system,Attempt:0,}" Mar 7 02:05:30.959885 kubelet[2644]: E0307 02:05:30.959847 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:05:31.963147 kubelet[2644]: E0307 02:05:31.962531 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:05:34.262674 systemd-networkd[1383]: cilium_host: Link UP Mar 7 02:05:34.265021 systemd-networkd[1383]: cilium_net: Link UP Mar 7 02:05:34.265529 systemd-networkd[1383]: cilium_net: Gained carrier Mar 7 02:05:34.275887 systemd-networkd[1383]: cilium_host: Gained carrier Mar 7 02:05:34.277567 systemd-networkd[1383]: cilium_net: 
Gained IPv6LL Mar 7 02:05:34.280448 systemd-networkd[1383]: cilium_host: Gained IPv6LL Mar 7 02:05:34.787275 systemd-networkd[1383]: cilium_vxlan: Link UP Mar 7 02:05:34.787311 systemd-networkd[1383]: cilium_vxlan: Gained carrier Mar 7 02:05:35.168782 systemd[1]: run-containerd-runc-k8s.io-f746e26bc1912a41ca7d6028a69ba1fe4f6faffaa523360dbffdedae17f48410-runc.jaIGNt.mount: Deactivated successfully. Mar 7 02:05:35.458579 kernel: NET: Registered PF_ALG protocol family Mar 7 02:05:36.319449 systemd-networkd[1383]: cilium_vxlan: Gained IPv6LL Mar 7 02:05:38.439168 systemd-networkd[1383]: lxc_health: Link UP Mar 7 02:05:38.518684 systemd-networkd[1383]: lxc_health: Gained carrier Mar 7 02:05:38.553497 kubelet[2644]: E0307 02:05:38.551656 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:05:38.959638 systemd-networkd[1383]: lxc463d9548e593: Link UP Mar 7 02:05:38.978271 kernel: eth0: renamed from tmp0e217 Mar 7 02:05:38.984132 systemd-networkd[1383]: lxc463d9548e593: Gained carrier Mar 7 02:05:39.014023 kubelet[2644]: E0307 02:05:39.012590 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:05:39.069409 systemd-networkd[1383]: lxc4ce03fa74e0e: Link UP Mar 7 02:05:39.077395 kernel: eth0: renamed from tmp224c3 Mar 7 02:05:39.094755 systemd-networkd[1383]: lxc4ce03fa74e0e: Gained carrier Mar 7 02:05:40.016904 kubelet[2644]: E0307 02:05:40.016308 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:05:40.162899 systemd-networkd[1383]: lxc_health: Gained IPv6LL Mar 7 02:05:40.411011 systemd-networkd[1383]: lxc463d9548e593: Gained IPv6LL Mar 7 02:05:40.593908 kubelet[2644]: E0307 
02:05:40.593812 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:05:40.604417 systemd-networkd[1383]: lxc4ce03fa74e0e: Gained IPv6LL Mar 7 02:05:46.507652 containerd[1473]: time="2026-03-07T02:05:46.505284686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 02:05:46.507652 containerd[1473]: time="2026-03-07T02:05:46.505356639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 02:05:46.507652 containerd[1473]: time="2026-03-07T02:05:46.505466391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:05:46.507652 containerd[1473]: time="2026-03-07T02:05:46.505622148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:05:46.509021 containerd[1473]: time="2026-03-07T02:05:46.507984325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 02:05:46.509021 containerd[1473]: time="2026-03-07T02:05:46.508093395Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 02:05:46.509021 containerd[1473]: time="2026-03-07T02:05:46.508119033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:05:46.510115 containerd[1473]: time="2026-03-07T02:05:46.509929993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 02:05:46.555591 systemd[1]: Started cri-containerd-0e217ad3699395979899392c7f5e9aea1ba83a2b157c431529028333a81560d5.scope - libcontainer container 0e217ad3699395979899392c7f5e9aea1ba83a2b157c431529028333a81560d5. Mar 7 02:05:46.603333 systemd[1]: Started cri-containerd-224c38280e8872a29361611f08d71dd9402bafc73884375eb74f80cb5cc9e2d9.scope - libcontainer container 224c38280e8872a29361611f08d71dd9402bafc73884375eb74f80cb5cc9e2d9. Mar 7 02:05:46.615852 systemd-resolved[1385]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 02:05:46.620967 sudo[1640]: pam_unix(sudo:session): session closed for user root Mar 7 02:05:46.627708 sshd[1637]: pam_unix(sshd:session): session closed for user core Mar 7 02:05:46.639706 systemd[1]: sshd@6-10.0.0.146:22-10.0.0.1:34586.service: Deactivated successfully. Mar 7 02:05:46.641418 systemd-resolved[1385]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 02:05:46.644825 systemd[1]: session-7.scope: Deactivated successfully. Mar 7 02:05:46.647877 systemd[1]: session-7.scope: Consumed 27.283s CPU time, 164.1M memory peak, 0B memory swap peak. Mar 7 02:05:46.654445 systemd-logind[1461]: Session 7 logged out. Waiting for processes to exit. Mar 7 02:05:46.657955 systemd-logind[1461]: Removed session 7. 
Mar 7 02:05:46.689626 containerd[1473]: time="2026-03-07T02:05:46.689413418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-fmcm8,Uid:c1482172-c714-4892-94df-10b7c529b71e,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e217ad3699395979899392c7f5e9aea1ba83a2b157c431529028333a81560d5\"" Mar 7 02:05:46.692459 kubelet[2644]: E0307 02:05:46.692421 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:05:46.712016 containerd[1473]: time="2026-03-07T02:05:46.708767218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-25sd6,Uid:c31963a9-b9cf-4c77-9295-6a6309f71424,Namespace:kube-system,Attempt:0,} returns sandbox id \"224c38280e8872a29361611f08d71dd9402bafc73884375eb74f80cb5cc9e2d9\"" Mar 7 02:05:46.712300 kubelet[2644]: E0307 02:05:46.710721 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:05:46.725140 containerd[1473]: time="2026-03-07T02:05:46.724782971Z" level=info msg="CreateContainer within sandbox \"0e217ad3699395979899392c7f5e9aea1ba83a2b157c431529028333a81560d5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 02:05:46.730024 containerd[1473]: time="2026-03-07T02:05:46.728814505Z" level=info msg="CreateContainer within sandbox \"224c38280e8872a29361611f08d71dd9402bafc73884375eb74f80cb5cc9e2d9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 02:05:46.777477 containerd[1473]: time="2026-03-07T02:05:46.776440569Z" level=info msg="CreateContainer within sandbox \"0e217ad3699395979899392c7f5e9aea1ba83a2b157c431529028333a81560d5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b0b24398469d5288f05969fbd122ef5127f578f9a45f816b7d055ef4e5f2f546\"" Mar 7 02:05:46.777760 
containerd[1473]: time="2026-03-07T02:05:46.777727803Z" level=info msg="StartContainer for \"b0b24398469d5288f05969fbd122ef5127f578f9a45f816b7d055ef4e5f2f546\"" Mar 7 02:05:46.788402 containerd[1473]: time="2026-03-07T02:05:46.788319153Z" level=info msg="CreateContainer within sandbox \"224c38280e8872a29361611f08d71dd9402bafc73884375eb74f80cb5cc9e2d9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"51839510e78c4dac2f9e22c21d6332e3a6c87ae7e002c1312ea1a559bb077ef4\"" Mar 7 02:05:46.792333 containerd[1473]: time="2026-03-07T02:05:46.792079476Z" level=info msg="StartContainer for \"51839510e78c4dac2f9e22c21d6332e3a6c87ae7e002c1312ea1a559bb077ef4\"" Mar 7 02:05:46.835684 systemd[1]: Started cri-containerd-b0b24398469d5288f05969fbd122ef5127f578f9a45f816b7d055ef4e5f2f546.scope - libcontainer container b0b24398469d5288f05969fbd122ef5127f578f9a45f816b7d055ef4e5f2f546. Mar 7 02:05:46.847497 systemd[1]: Started cri-containerd-51839510e78c4dac2f9e22c21d6332e3a6c87ae7e002c1312ea1a559bb077ef4.scope - libcontainer container 51839510e78c4dac2f9e22c21d6332e3a6c87ae7e002c1312ea1a559bb077ef4. 
Mar 7 02:05:46.901342 containerd[1473]: time="2026-03-07T02:05:46.900467257Z" level=info msg="StartContainer for \"b0b24398469d5288f05969fbd122ef5127f578f9a45f816b7d055ef4e5f2f546\" returns successfully" Mar 7 02:05:46.922836 containerd[1473]: time="2026-03-07T02:05:46.921890130Z" level=info msg="StartContainer for \"51839510e78c4dac2f9e22c21d6332e3a6c87ae7e002c1312ea1a559bb077ef4\" returns successfully" Mar 7 02:05:47.060928 kubelet[2644]: E0307 02:05:47.058784 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:05:47.072937 kubelet[2644]: E0307 02:05:47.072851 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:05:47.121887 kubelet[2644]: I0307 02:05:47.121752 2644 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-25sd6" podStartSLOduration=80.115793388 podStartE2EDuration="1m20.115793388s" podCreationTimestamp="2026-03-07 02:04:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 02:05:47.100336107 +0000 UTC m=+82.037426197" watchObservedRunningTime="2026-03-07 02:05:47.115793388 +0000 UTC m=+82.052883478" Mar 7 02:05:47.156045 kubelet[2644]: I0307 02:05:47.155678 2644 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-fmcm8" podStartSLOduration=80.155599324 podStartE2EDuration="1m20.155599324s" podCreationTimestamp="2026-03-07 02:04:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 02:05:47.153910989 +0000 UTC m=+82.091001079" watchObservedRunningTime="2026-03-07 02:05:47.155599324 +0000 UTC 
m=+82.092689414" Mar 7 02:05:47.517461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1758582224.mount: Deactivated successfully. Mar 7 02:05:48.074829 kubelet[2644]: E0307 02:05:48.074759 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:05:48.076447 kubelet[2644]: E0307 02:05:48.075818 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:05:49.082002 kubelet[2644]: E0307 02:05:49.081846 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:05:49.082002 kubelet[2644]: E0307 02:05:49.081976 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:05:51.612360 kubelet[2644]: E0307 02:05:51.611640 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:05:54.593810 kubelet[2644]: E0307 02:05:54.592882 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:05:57.735855 kubelet[2644]: E0307 02:05:57.733028 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:06:12.488505 kubelet[2644]: E0307 02:06:12.483329 2644 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.428s" Mar 7 
02:06:25.723605 update_engine[1464]: I20260307 02:06:25.723376 1464 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Mar 7 02:06:25.723605 update_engine[1464]: I20260307 02:06:25.723551 1464 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Mar 7 02:06:25.730077 update_engine[1464]: I20260307 02:06:25.725178 1464 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Mar 7 02:06:25.730077 update_engine[1464]: I20260307 02:06:25.727018 1464 omaha_request_params.cc:62] Current group set to lts Mar 7 02:06:25.730077 update_engine[1464]: I20260307 02:06:25.729770 1464 update_attempter.cc:499] Already updated boot flags. Skipping. Mar 7 02:06:25.730077 update_engine[1464]: I20260307 02:06:25.729794 1464 update_attempter.cc:643] Scheduling an action processor start. Mar 7 02:06:25.730077 update_engine[1464]: I20260307 02:06:25.729820 1464 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 7 02:06:25.730077 update_engine[1464]: I20260307 02:06:25.729863 1464 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Mar 7 02:06:25.730077 update_engine[1464]: I20260307 02:06:25.730005 1464 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 7 02:06:25.730077 update_engine[1464]: I20260307 02:06:25.730022 1464 omaha_request_action.cc:272] Request: Mar 7 02:06:25.730077 update_engine[1464]: Mar 7 02:06:25.730077 update_engine[1464]: Mar 7 02:06:25.730077 update_engine[1464]: Mar 7 02:06:25.730077 update_engine[1464]: Mar 7 02:06:25.730077 update_engine[1464]: Mar 7 02:06:25.730077 update_engine[1464]: Mar 7 02:06:25.730077 update_engine[1464]: Mar 7 02:06:25.730077 update_engine[1464]: Mar 7 02:06:25.730077 update_engine[1464]: I20260307 02:06:25.730034 1464 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 02:06:25.731669 locksmithd[1505]: LastCheckedTime=0 Progress=0 
CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Mar 7 02:06:25.741853 update_engine[1464]: I20260307 02:06:25.741689 1464 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 02:06:25.743091 update_engine[1464]: I20260307 02:06:25.742831 1464 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 7 02:06:25.760915 update_engine[1464]: E20260307 02:06:25.760725 1464 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 02:06:25.761058 update_engine[1464]: I20260307 02:06:25.760949 1464 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Mar 7 02:06:31.965000 systemd[1]: Started sshd@7-10.0.0.146:22-10.0.0.1:59870.service - OpenSSH per-connection server daemon (10.0.0.1:59870). Mar 7 02:06:32.088953 sshd[4216]: Accepted publickey for core from 10.0.0.1 port 59870 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 02:06:32.092500 sshd[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:06:32.108054 systemd-logind[1461]: New session 8 of user core. Mar 7 02:06:32.117997 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 7 02:06:32.398444 sshd[4216]: pam_unix(sshd:session): session closed for user core Mar 7 02:06:32.407041 systemd[1]: sshd@7-10.0.0.146:22-10.0.0.1:59870.service: Deactivated successfully. Mar 7 02:06:32.410809 systemd[1]: session-8.scope: Deactivated successfully. Mar 7 02:06:32.417550 systemd-logind[1461]: Session 8 logged out. Waiting for processes to exit. Mar 7 02:06:32.419934 systemd-logind[1461]: Removed session 8. 
Mar 7 02:06:35.720658 update_engine[1464]: I20260307 02:06:35.719395 1464 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 02:06:35.720658 update_engine[1464]: I20260307 02:06:35.719862 1464 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 02:06:35.720658 update_engine[1464]: I20260307 02:06:35.720355 1464 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 7 02:06:35.738758 update_engine[1464]: E20260307 02:06:35.738404 1464 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 02:06:35.738758 update_engine[1464]: I20260307 02:06:35.738560 1464 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Mar 7 02:06:37.449828 systemd[1]: Started sshd@8-10.0.0.146:22-10.0.0.1:59906.service - OpenSSH per-connection server daemon (10.0.0.1:59906). Mar 7 02:06:37.548188 sshd[4239]: Accepted publickey for core from 10.0.0.1 port 59906 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 02:06:37.559190 sshd[4239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:06:37.594621 systemd-logind[1461]: New session 9 of user core. Mar 7 02:06:37.621369 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 7 02:06:37.919773 sshd[4239]: pam_unix(sshd:session): session closed for user core Mar 7 02:06:37.925689 systemd[1]: sshd@8-10.0.0.146:22-10.0.0.1:59906.service: Deactivated successfully. Mar 7 02:06:37.932606 systemd[1]: session-9.scope: Deactivated successfully. Mar 7 02:06:37.938678 systemd-logind[1461]: Session 9 logged out. Waiting for processes to exit. Mar 7 02:06:37.941688 systemd-logind[1461]: Removed session 9. Mar 7 02:06:42.971079 systemd[1]: Started sshd@9-10.0.0.146:22-10.0.0.1:42704.service - OpenSSH per-connection server daemon (10.0.0.1:42704). 
Mar 7 02:06:43.076701 sshd[4254]: Accepted publickey for core from 10.0.0.1 port 42704 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 02:06:43.079287 sshd[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:06:43.092125 systemd-logind[1461]: New session 10 of user core. Mar 7 02:06:43.139850 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 7 02:06:43.434455 sshd[4254]: pam_unix(sshd:session): session closed for user core Mar 7 02:06:43.445820 systemd[1]: sshd@9-10.0.0.146:22-10.0.0.1:42704.service: Deactivated successfully. Mar 7 02:06:43.448771 systemd[1]: session-10.scope: Deactivated successfully. Mar 7 02:06:43.451694 systemd-logind[1461]: Session 10 logged out. Waiting for processes to exit. Mar 7 02:06:43.457005 systemd-logind[1461]: Removed session 10. Mar 7 02:06:45.722639 update_engine[1464]: I20260307 02:06:45.722237 1464 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 02:06:45.725746 update_engine[1464]: I20260307 02:06:45.722909 1464 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 02:06:45.725746 update_engine[1464]: I20260307 02:06:45.723537 1464 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 7 02:06:45.741719 update_engine[1464]: E20260307 02:06:45.740707 1464 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 02:06:45.741719 update_engine[1464]: I20260307 02:06:45.741178 1464 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Mar 7 02:06:48.480595 systemd[1]: Started sshd@10-10.0.0.146:22-10.0.0.1:42718.service - OpenSSH per-connection server daemon (10.0.0.1:42718). 
Mar 7 02:06:48.563476 sshd[4270]: Accepted publickey for core from 10.0.0.1 port 42718 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 02:06:48.567277 sshd[4270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:06:48.582296 systemd-logind[1461]: New session 11 of user core. Mar 7 02:06:48.607308 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 7 02:06:48.916159 sshd[4270]: pam_unix(sshd:session): session closed for user core Mar 7 02:06:48.929603 systemd[1]: sshd@10-10.0.0.146:22-10.0.0.1:42718.service: Deactivated successfully. Mar 7 02:06:48.932530 systemd[1]: session-11.scope: Deactivated successfully. Mar 7 02:06:48.936023 systemd-logind[1461]: Session 11 logged out. Waiting for processes to exit. Mar 7 02:06:48.943117 systemd-logind[1461]: Removed session 11. Mar 7 02:06:49.598541 kubelet[2644]: E0307 02:06:49.596177 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:06:51.595840 kubelet[2644]: E0307 02:06:51.594063 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 02:06:53.951781 systemd[1]: Started sshd@11-10.0.0.146:22-10.0.0.1:53488.service - OpenSSH per-connection server daemon (10.0.0.1:53488). Mar 7 02:06:54.012483 sshd[4287]: Accepted publickey for core from 10.0.0.1 port 53488 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 02:06:54.017651 sshd[4287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 02:06:54.028082 systemd-logind[1461]: New session 12 of user core. Mar 7 02:06:54.044987 systemd[1]: Started session-12.scope - Session 12 of User core. 
Mar 7 02:06:54.298451 sshd[4287]: pam_unix(sshd:session): session closed for user core
Mar 7 02:06:54.308971 systemd[1]: sshd@11-10.0.0.146:22-10.0.0.1:53488.service: Deactivated successfully.
Mar 7 02:06:54.313171 systemd[1]: session-12.scope: Deactivated successfully.
Mar 7 02:06:54.315050 systemd-logind[1461]: Session 12 logged out. Waiting for processes to exit.
Mar 7 02:06:54.318597 systemd-logind[1461]: Removed session 12.
Mar 7 02:06:55.718656 update_engine[1464]: I20260307 02:06:55.718524 1464 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 7 02:06:55.720591 update_engine[1464]: I20260307 02:06:55.719030 1464 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 7 02:06:55.720591 update_engine[1464]: I20260307 02:06:55.719520 1464 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 7 02:06:55.739525 update_engine[1464]: E20260307 02:06:55.737759 1464 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 7 02:06:55.739525 update_engine[1464]: I20260307 02:06:55.737919 1464 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 7 02:06:55.739525 update_engine[1464]: I20260307 02:06:55.737943 1464 omaha_request_action.cc:617] Omaha request response:
Mar 7 02:06:55.739525 update_engine[1464]: E20260307 02:06:55.738133 1464 omaha_request_action.cc:636] Omaha request network transfer failed.
Mar 7 02:06:55.739525 update_engine[1464]: I20260307 02:06:55.738170 1464 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Mar 7 02:06:55.739525 update_engine[1464]: I20260307 02:06:55.738184 1464 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 7 02:06:55.739525 update_engine[1464]: I20260307 02:06:55.738325 1464 update_attempter.cc:306] Processing Done.
Mar 7 02:06:55.739525 update_engine[1464]: E20260307 02:06:55.738358 1464 update_attempter.cc:619] Update failed.
Mar 7 02:06:55.739525 update_engine[1464]: I20260307 02:06:55.738373 1464 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Mar 7 02:06:55.739525 update_engine[1464]: I20260307 02:06:55.738385 1464 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Mar 7 02:06:55.739525 update_engine[1464]: I20260307 02:06:55.738399 1464 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Mar 7 02:06:55.739525 update_engine[1464]: I20260307 02:06:55.738559 1464 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 7 02:06:55.739525 update_engine[1464]: I20260307 02:06:55.738600 1464 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 7 02:06:55.739525 update_engine[1464]: I20260307 02:06:55.738610 1464 omaha_request_action.cc:272] Request:
Mar 7 02:06:55.739525 update_engine[1464]:
Mar 7 02:06:55.739525 update_engine[1464]:
Mar 7 02:06:55.741137 update_engine[1464]:
Mar 7 02:06:55.741137 update_engine[1464]:
Mar 7 02:06:55.741137 update_engine[1464]:
Mar 7 02:06:55.741137 update_engine[1464]:
Mar 7 02:06:55.741137 update_engine[1464]: I20260307 02:06:55.738622 1464 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 7 02:06:55.741137 update_engine[1464]: I20260307 02:06:55.738978 1464 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 7 02:06:55.741137 update_engine[1464]: I20260307 02:06:55.739318 1464 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 7 02:06:55.741947 locksmithd[1505]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Mar 7 02:06:55.761461 update_engine[1464]: E20260307 02:06:55.759989 1464 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 7 02:06:55.761461 update_engine[1464]: I20260307 02:06:55.760778 1464 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 7 02:06:55.762308 update_engine[1464]: I20260307 02:06:55.761719 1464 omaha_request_action.cc:617] Omaha request response:
Mar 7 02:06:55.762308 update_engine[1464]: I20260307 02:06:55.761757 1464 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 7 02:06:55.762308 update_engine[1464]: I20260307 02:06:55.761770 1464 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 7 02:06:55.762308 update_engine[1464]: I20260307 02:06:55.761782 1464 update_attempter.cc:306] Processing Done.
Mar 7 02:06:55.762308 update_engine[1464]: I20260307 02:06:55.761795 1464 update_attempter.cc:310] Error event sent.
Mar 7 02:06:55.762308 update_engine[1464]: I20260307 02:06:55.761816 1464 update_check_scheduler.cc:74] Next update check in 49m28s
Mar 7 02:06:55.763866 locksmithd[1505]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Mar 7 02:06:58.593585 kubelet[2644]: E0307 02:06:58.593370 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:06:59.356293 systemd[1]: Started sshd@12-10.0.0.146:22-10.0.0.1:53498.service - OpenSSH per-connection server daemon (10.0.0.1:53498).
Mar 7 02:06:59.498913 sshd[4303]: Accepted publickey for core from 10.0.0.1 port 53498 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:06:59.513438 sshd[4303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:06:59.537433 systemd-logind[1461]: New session 13 of user core.
Mar 7 02:06:59.548987 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 7 02:06:59.783619 sshd[4303]: pam_unix(sshd:session): session closed for user core
Mar 7 02:06:59.792647 systemd[1]: sshd@12-10.0.0.146:22-10.0.0.1:53498.service: Deactivated successfully.
Mar 7 02:06:59.798556 systemd[1]: session-13.scope: Deactivated successfully.
Mar 7 02:06:59.811181 systemd-logind[1461]: Session 13 logged out. Waiting for processes to exit.
Mar 7 02:06:59.819972 systemd-logind[1461]: Removed session 13.
Mar 7 02:07:04.803596 systemd[1]: Started sshd@13-10.0.0.146:22-10.0.0.1:36758.service - OpenSSH per-connection server daemon (10.0.0.1:36758).
Mar 7 02:07:04.866754 sshd[4319]: Accepted publickey for core from 10.0.0.1 port 36758 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:07:04.870045 sshd[4319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:07:04.879976 systemd-logind[1461]: New session 14 of user core.
Mar 7 02:07:04.893642 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 7 02:07:05.068116 sshd[4319]: pam_unix(sshd:session): session closed for user core
Mar 7 02:07:05.079589 systemd[1]: sshd@13-10.0.0.146:22-10.0.0.1:36758.service: Deactivated successfully.
Mar 7 02:07:05.083169 systemd[1]: session-14.scope: Deactivated successfully.
Mar 7 02:07:05.086995 systemd-logind[1461]: Session 14 logged out. Waiting for processes to exit.
Mar 7 02:07:05.096452 systemd[1]: Started sshd@14-10.0.0.146:22-10.0.0.1:36766.service - OpenSSH per-connection server daemon (10.0.0.1:36766).
Mar 7 02:07:05.098575 systemd-logind[1461]: Removed session 14.
Mar 7 02:07:05.162895 sshd[4334]: Accepted publickey for core from 10.0.0.1 port 36766 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:07:05.166186 sshd[4334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:07:05.179784 systemd-logind[1461]: New session 15 of user core.
Mar 7 02:07:05.192652 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 7 02:07:05.472924 sshd[4334]: pam_unix(sshd:session): session closed for user core
Mar 7 02:07:05.486750 systemd[1]: sshd@14-10.0.0.146:22-10.0.0.1:36766.service: Deactivated successfully.
Mar 7 02:07:05.493099 systemd[1]: session-15.scope: Deactivated successfully.
Mar 7 02:07:05.498010 systemd-logind[1461]: Session 15 logged out. Waiting for processes to exit.
Mar 7 02:07:05.515406 systemd[1]: Started sshd@15-10.0.0.146:22-10.0.0.1:36844.service - OpenSSH per-connection server daemon (10.0.0.1:36844).
Mar 7 02:07:05.519651 systemd-logind[1461]: Removed session 15.
Mar 7 02:07:05.572139 sshd[4348]: Accepted publickey for core from 10.0.0.1 port 36844 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:07:05.575050 sshd[4348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:07:05.585755 systemd-logind[1461]: New session 16 of user core.
Mar 7 02:07:05.596968 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 7 02:07:05.793393 sshd[4348]: pam_unix(sshd:session): session closed for user core
Mar 7 02:07:05.799818 systemd[1]: sshd@15-10.0.0.146:22-10.0.0.1:36844.service: Deactivated successfully.
Mar 7 02:07:05.803605 systemd[1]: session-16.scope: Deactivated successfully.
Mar 7 02:07:05.804825 systemd-logind[1461]: Session 16 logged out. Waiting for processes to exit.
Mar 7 02:07:05.806943 systemd-logind[1461]: Removed session 16.
Mar 7 02:07:06.594026 kubelet[2644]: E0307 02:07:06.593910 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:07:09.597729 kubelet[2644]: E0307 02:07:09.595770 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:07:09.597729 kubelet[2644]: E0307 02:07:09.596932 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:07:10.594841 kubelet[2644]: E0307 02:07:10.594695 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:07:10.829455 systemd[1]: Started sshd@16-10.0.0.146:22-10.0.0.1:46476.service - OpenSSH per-connection server daemon (10.0.0.1:46476).
Mar 7 02:07:10.903340 sshd[4367]: Accepted publickey for core from 10.0.0.1 port 46476 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:07:10.907985 sshd[4367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:07:10.924052 systemd-logind[1461]: New session 17 of user core.
Mar 7 02:07:10.935086 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 7 02:07:11.143817 sshd[4367]: pam_unix(sshd:session): session closed for user core
Mar 7 02:07:11.152569 systemd[1]: sshd@16-10.0.0.146:22-10.0.0.1:46476.service: Deactivated successfully.
Mar 7 02:07:11.156480 systemd[1]: session-17.scope: Deactivated successfully.
Mar 7 02:07:11.158596 systemd-logind[1461]: Session 17 logged out. Waiting for processes to exit.
Mar 7 02:07:11.161528 systemd-logind[1461]: Removed session 17.
Mar 7 02:07:15.597856 kubelet[2644]: E0307 02:07:15.594572 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:07:16.195508 systemd[1]: Started sshd@17-10.0.0.146:22-10.0.0.1:46492.service - OpenSSH per-connection server daemon (10.0.0.1:46492).
Mar 7 02:07:16.277759 sshd[4381]: Accepted publickey for core from 10.0.0.1 port 46492 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:07:16.281013 sshd[4381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:07:16.296556 systemd-logind[1461]: New session 18 of user core.
Mar 7 02:07:16.317444 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 7 02:07:16.544573 sshd[4381]: pam_unix(sshd:session): session closed for user core
Mar 7 02:07:16.550962 systemd[1]: sshd@17-10.0.0.146:22-10.0.0.1:46492.service: Deactivated successfully.
Mar 7 02:07:16.555105 systemd[1]: session-18.scope: Deactivated successfully.
Mar 7 02:07:16.559782 systemd-logind[1461]: Session 18 logged out. Waiting for processes to exit.
Mar 7 02:07:16.564724 systemd-logind[1461]: Removed session 18.
Mar 7 02:07:21.569690 systemd[1]: Started sshd@18-10.0.0.146:22-10.0.0.1:34148.service - OpenSSH per-connection server daemon (10.0.0.1:34148).
Mar 7 02:07:21.609504 sshd[4395]: Accepted publickey for core from 10.0.0.1 port 34148 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:07:21.612354 sshd[4395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:07:21.619402 systemd-logind[1461]: New session 19 of user core.
Mar 7 02:07:21.622507 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 7 02:07:21.784860 sshd[4395]: pam_unix(sshd:session): session closed for user core
Mar 7 02:07:21.798303 systemd[1]: sshd@18-10.0.0.146:22-10.0.0.1:34148.service: Deactivated successfully.
Mar 7 02:07:21.801398 systemd[1]: session-19.scope: Deactivated successfully.
Mar 7 02:07:21.804851 systemd-logind[1461]: Session 19 logged out. Waiting for processes to exit.
Mar 7 02:07:21.816102 systemd[1]: Started sshd@19-10.0.0.146:22-10.0.0.1:34150.service - OpenSSH per-connection server daemon (10.0.0.1:34150).
Mar 7 02:07:21.818052 systemd-logind[1461]: Removed session 19.
Mar 7 02:07:21.861172 sshd[4409]: Accepted publickey for core from 10.0.0.1 port 34150 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:07:21.863623 sshd[4409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:07:21.871710 systemd-logind[1461]: New session 20 of user core.
Mar 7 02:07:21.883508 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 7 02:07:22.307977 sshd[4409]: pam_unix(sshd:session): session closed for user core
Mar 7 02:07:22.321956 systemd[1]: sshd@19-10.0.0.146:22-10.0.0.1:34150.service: Deactivated successfully.
Mar 7 02:07:22.325118 systemd[1]: session-20.scope: Deactivated successfully.
Mar 7 02:07:22.328183 systemd-logind[1461]: Session 20 logged out. Waiting for processes to exit.
Mar 7 02:07:22.337984 systemd[1]: Started sshd@20-10.0.0.146:22-10.0.0.1:34152.service - OpenSSH per-connection server daemon (10.0.0.1:34152).
Mar 7 02:07:22.340432 systemd-logind[1461]: Removed session 20.
Mar 7 02:07:22.418601 sshd[4423]: Accepted publickey for core from 10.0.0.1 port 34152 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:07:22.421662 sshd[4423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:07:22.432460 systemd-logind[1461]: New session 21 of user core.
Mar 7 02:07:22.442876 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 7 02:07:23.284444 sshd[4423]: pam_unix(sshd:session): session closed for user core
Mar 7 02:07:23.293067 systemd[1]: sshd@20-10.0.0.146:22-10.0.0.1:34152.service: Deactivated successfully.
Mar 7 02:07:23.296473 systemd[1]: session-21.scope: Deactivated successfully.
Mar 7 02:07:23.299416 systemd-logind[1461]: Session 21 logged out. Waiting for processes to exit.
Mar 7 02:07:23.308119 systemd[1]: Started sshd@21-10.0.0.146:22-10.0.0.1:34166.service - OpenSSH per-connection server daemon (10.0.0.1:34166).
Mar 7 02:07:23.310489 systemd-logind[1461]: Removed session 21.
Mar 7 02:07:23.367005 sshd[4443]: Accepted publickey for core from 10.0.0.1 port 34166 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:07:23.370873 sshd[4443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:07:23.381442 systemd-logind[1461]: New session 22 of user core.
Mar 7 02:07:23.401544 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 7 02:07:23.746526 sshd[4443]: pam_unix(sshd:session): session closed for user core
Mar 7 02:07:23.760350 systemd[1]: sshd@21-10.0.0.146:22-10.0.0.1:34166.service: Deactivated successfully.
Mar 7 02:07:23.766630 systemd[1]: session-22.scope: Deactivated successfully.
Mar 7 02:07:23.774092 systemd-logind[1461]: Session 22 logged out. Waiting for processes to exit.
Mar 7 02:07:23.786532 systemd[1]: Started sshd@22-10.0.0.146:22-10.0.0.1:34170.service - OpenSSH per-connection server daemon (10.0.0.1:34170).
Mar 7 02:07:23.788520 systemd-logind[1461]: Removed session 22.
Mar 7 02:07:23.834885 sshd[4456]: Accepted publickey for core from 10.0.0.1 port 34170 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:07:23.840383 sshd[4456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:07:23.851734 systemd-logind[1461]: New session 23 of user core.
Mar 7 02:07:23.860614 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 7 02:07:24.013158 sshd[4456]: pam_unix(sshd:session): session closed for user core
Mar 7 02:07:24.018062 systemd[1]: sshd@22-10.0.0.146:22-10.0.0.1:34170.service: Deactivated successfully.
Mar 7 02:07:24.022185 systemd[1]: session-23.scope: Deactivated successfully.
Mar 7 02:07:24.024596 systemd-logind[1461]: Session 23 logged out. Waiting for processes to exit.
Mar 7 02:07:24.026630 systemd-logind[1461]: Removed session 23.
Mar 7 02:07:29.030047 systemd[1]: Started sshd@23-10.0.0.146:22-10.0.0.1:34214.service - OpenSSH per-connection server daemon (10.0.0.1:34214).
Mar 7 02:07:29.108523 sshd[4474]: Accepted publickey for core from 10.0.0.1 port 34214 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:07:29.112297 sshd[4474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:07:29.128042 systemd-logind[1461]: New session 24 of user core.
Mar 7 02:07:29.144617 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 7 02:07:29.317797 sshd[4474]: pam_unix(sshd:session): session closed for user core
Mar 7 02:07:29.326452 systemd[1]: sshd@23-10.0.0.146:22-10.0.0.1:34214.service: Deactivated successfully.
Mar 7 02:07:29.332475 systemd[1]: session-24.scope: Deactivated successfully.
Mar 7 02:07:29.334065 systemd-logind[1461]: Session 24 logged out. Waiting for processes to exit.
Mar 7 02:07:29.336616 systemd-logind[1461]: Removed session 24.
Mar 7 02:07:34.335514 systemd[1]: Started sshd@24-10.0.0.146:22-10.0.0.1:52182.service - OpenSSH per-connection server daemon (10.0.0.1:52182).
Mar 7 02:07:34.386518 sshd[4490]: Accepted publickey for core from 10.0.0.1 port 52182 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:07:34.389627 sshd[4490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:07:34.398552 systemd-logind[1461]: New session 25 of user core.
Mar 7 02:07:34.408638 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 7 02:07:34.556081 sshd[4490]: pam_unix(sshd:session): session closed for user core
Mar 7 02:07:34.563985 systemd[1]: sshd@24-10.0.0.146:22-10.0.0.1:52182.service: Deactivated successfully.
Mar 7 02:07:34.610067 systemd[1]: session-25.scope: Deactivated successfully.
Mar 7 02:07:34.642456 systemd-logind[1461]: Session 25 logged out. Waiting for processes to exit.
Mar 7 02:07:34.660427 systemd-logind[1461]: Removed session 25.
Mar 7 02:07:39.571731 systemd[1]: Started sshd@25-10.0.0.146:22-10.0.0.1:52184.service - OpenSSH per-connection server daemon (10.0.0.1:52184).
Mar 7 02:07:39.634049 sshd[4507]: Accepted publickey for core from 10.0.0.1 port 52184 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:07:39.636399 sshd[4507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:07:39.642784 systemd-logind[1461]: New session 26 of user core.
Mar 7 02:07:39.654492 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 7 02:07:39.778359 sshd[4507]: pam_unix(sshd:session): session closed for user core
Mar 7 02:07:39.791417 systemd[1]: sshd@25-10.0.0.146:22-10.0.0.1:52184.service: Deactivated successfully.
Mar 7 02:07:39.794014 systemd[1]: session-26.scope: Deactivated successfully.
Mar 7 02:07:39.795936 systemd-logind[1461]: Session 26 logged out. Waiting for processes to exit.
Mar 7 02:07:39.807866 systemd[1]: Started sshd@26-10.0.0.146:22-10.0.0.1:52190.service - OpenSSH per-connection server daemon (10.0.0.1:52190).
Mar 7 02:07:39.809538 systemd-logind[1461]: Removed session 26.
Mar 7 02:07:39.848804 sshd[4521]: Accepted publickey for core from 10.0.0.1 port 52190 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:07:39.850888 sshd[4521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:07:39.857240 systemd-logind[1461]: New session 27 of user core.
Mar 7 02:07:39.866436 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 7 02:07:41.288717 containerd[1473]: time="2026-03-07T02:07:41.288588664Z" level=info msg="StopContainer for \"cfbb0adeb5a94be565cd412262dd0fa66dde372053e5797aeb46671aadf8c11d\" with timeout 30 (s)"
Mar 7 02:07:41.289597 containerd[1473]: time="2026-03-07T02:07:41.289341411Z" level=info msg="Stop container \"cfbb0adeb5a94be565cd412262dd0fa66dde372053e5797aeb46671aadf8c11d\" with signal terminated"
Mar 7 02:07:41.342531 systemd[1]: cri-containerd-cfbb0adeb5a94be565cd412262dd0fa66dde372053e5797aeb46671aadf8c11d.scope: Deactivated successfully.
Mar 7 02:07:41.343179 systemd[1]: cri-containerd-cfbb0adeb5a94be565cd412262dd0fa66dde372053e5797aeb46671aadf8c11d.scope: Consumed 2.142s CPU time.
Mar 7 02:07:41.343798 containerd[1473]: time="2026-03-07T02:07:41.343726799Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 7 02:07:41.352922 containerd[1473]: time="2026-03-07T02:07:41.352884500Z" level=info msg="StopContainer for \"f746e26bc1912a41ca7d6028a69ba1fe4f6faffaa523360dbffdedae17f48410\" with timeout 2 (s)"
Mar 7 02:07:41.353537 containerd[1473]: time="2026-03-07T02:07:41.353410557Z" level=info msg="Stop container \"f746e26bc1912a41ca7d6028a69ba1fe4f6faffaa523360dbffdedae17f48410\" with signal terminated"
Mar 7 02:07:41.367469 systemd-networkd[1383]: lxc_health: Link DOWN
Mar 7 02:07:41.367495 systemd-networkd[1383]: lxc_health: Lost carrier
Mar 7 02:07:41.387921 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cfbb0adeb5a94be565cd412262dd0fa66dde372053e5797aeb46671aadf8c11d-rootfs.mount: Deactivated successfully.
Mar 7 02:07:41.400572 containerd[1473]: time="2026-03-07T02:07:41.400427971Z" level=info msg="shim disconnected" id=cfbb0adeb5a94be565cd412262dd0fa66dde372053e5797aeb46671aadf8c11d namespace=k8s.io
Mar 7 02:07:41.400572 containerd[1473]: time="2026-03-07T02:07:41.400561432Z" level=warning msg="cleaning up after shim disconnected" id=cfbb0adeb5a94be565cd412262dd0fa66dde372053e5797aeb46671aadf8c11d namespace=k8s.io
Mar 7 02:07:41.400572 containerd[1473]: time="2026-03-07T02:07:41.400574227Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 02:07:41.406065 systemd[1]: cri-containerd-f746e26bc1912a41ca7d6028a69ba1fe4f6faffaa523360dbffdedae17f48410.scope: Deactivated successfully.
Mar 7 02:07:41.406571 systemd[1]: cri-containerd-f746e26bc1912a41ca7d6028a69ba1fe4f6faffaa523360dbffdedae17f48410.scope: Consumed 21.806s CPU time.
Mar 7 02:07:41.447370 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f746e26bc1912a41ca7d6028a69ba1fe4f6faffaa523360dbffdedae17f48410-rootfs.mount: Deactivated successfully.
Mar 7 02:07:41.458252 containerd[1473]: time="2026-03-07T02:07:41.457975561Z" level=info msg="shim disconnected" id=f746e26bc1912a41ca7d6028a69ba1fe4f6faffaa523360dbffdedae17f48410 namespace=k8s.io
Mar 7 02:07:41.458886 containerd[1473]: time="2026-03-07T02:07:41.458106377Z" level=warning msg="cleaning up after shim disconnected" id=f746e26bc1912a41ca7d6028a69ba1fe4f6faffaa523360dbffdedae17f48410 namespace=k8s.io
Mar 7 02:07:41.458886 containerd[1473]: time="2026-03-07T02:07:41.458401894Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 02:07:41.462737 containerd[1473]: time="2026-03-07T02:07:41.462590031Z" level=info msg="StopContainer for \"cfbb0adeb5a94be565cd412262dd0fa66dde372053e5797aeb46671aadf8c11d\" returns successfully"
Mar 7 02:07:41.463882 containerd[1473]: time="2026-03-07T02:07:41.463829847Z" level=info msg="StopPodSandbox for \"e5289e09a66d1e51467b125f2f86fe7fcc09e7ffc3705dd25e0b4a141768f460\""
Mar 7 02:07:41.463882 containerd[1473]: time="2026-03-07T02:07:41.463867638Z" level=info msg="Container to stop \"cfbb0adeb5a94be565cd412262dd0fa66dde372053e5797aeb46671aadf8c11d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 02:07:41.468062 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e5289e09a66d1e51467b125f2f86fe7fcc09e7ffc3705dd25e0b4a141768f460-shm.mount: Deactivated successfully.
Mar 7 02:07:41.480646 systemd[1]: cri-containerd-e5289e09a66d1e51467b125f2f86fe7fcc09e7ffc3705dd25e0b4a141768f460.scope: Deactivated successfully.
Mar 7 02:07:41.502909 containerd[1473]: time="2026-03-07T02:07:41.502860699Z" level=info msg="StopContainer for \"f746e26bc1912a41ca7d6028a69ba1fe4f6faffaa523360dbffdedae17f48410\" returns successfully"
Mar 7 02:07:41.505629 containerd[1473]: time="2026-03-07T02:07:41.505292248Z" level=info msg="StopPodSandbox for \"1d83ef91e2542ee19ae2639079b1db2f8c43f5a8d812f7432538b9ebe8b0d685\""
Mar 7 02:07:41.505713 containerd[1473]: time="2026-03-07T02:07:41.505677002Z" level=info msg="Container to stop \"f746e26bc1912a41ca7d6028a69ba1fe4f6faffaa523360dbffdedae17f48410\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 02:07:41.505768 containerd[1473]: time="2026-03-07T02:07:41.505713220Z" level=info msg="Container to stop \"2f3fd2d560b5ff1835b10c74eb76dcd4411c964836932fa3e9a5ed87a9294325\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 02:07:41.505768 containerd[1473]: time="2026-03-07T02:07:41.505732968Z" level=info msg="Container to stop \"595db9abcfe5a48aaf5e998207c5d3f840fc6a543204d10d59534ad168ad5fdf\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 02:07:41.505768 containerd[1473]: time="2026-03-07T02:07:41.505749470Z" level=info msg="Container to stop \"c15c2526c987910e586af507e265d39c3477628d175025c51dc431ad5dacaf06\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 02:07:41.505836 containerd[1473]: time="2026-03-07T02:07:41.505768355Z" level=info msg="Container to stop \"f4cd76a7b354dd1a4330a98cd7f06f92bb672d063db12d4c596c7b73b9de934e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 02:07:41.522666 systemd[1]: cri-containerd-1d83ef91e2542ee19ae2639079b1db2f8c43f5a8d812f7432538b9ebe8b0d685.scope: Deactivated successfully.
Mar 7 02:07:41.536602 containerd[1473]: time="2026-03-07T02:07:41.536537859Z" level=info msg="shim disconnected" id=e5289e09a66d1e51467b125f2f86fe7fcc09e7ffc3705dd25e0b4a141768f460 namespace=k8s.io
Mar 7 02:07:41.537416 containerd[1473]: time="2026-03-07T02:07:41.537107283Z" level=warning msg="cleaning up after shim disconnected" id=e5289e09a66d1e51467b125f2f86fe7fcc09e7ffc3705dd25e0b4a141768f460 namespace=k8s.io
Mar 7 02:07:41.537497 containerd[1473]: time="2026-03-07T02:07:41.537412647Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 02:07:41.564775 containerd[1473]: time="2026-03-07T02:07:41.564182175Z" level=info msg="shim disconnected" id=1d83ef91e2542ee19ae2639079b1db2f8c43f5a8d812f7432538b9ebe8b0d685 namespace=k8s.io
Mar 7 02:07:41.564775 containerd[1473]: time="2026-03-07T02:07:41.564316459Z" level=warning msg="cleaning up after shim disconnected" id=1d83ef91e2542ee19ae2639079b1db2f8c43f5a8d812f7432538b9ebe8b0d685 namespace=k8s.io
Mar 7 02:07:41.564775 containerd[1473]: time="2026-03-07T02:07:41.564332760Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 02:07:41.564775 containerd[1473]: time="2026-03-07T02:07:41.564552542Z" level=info msg="TearDown network for sandbox \"e5289e09a66d1e51467b125f2f86fe7fcc09e7ffc3705dd25e0b4a141768f460\" successfully"
Mar 7 02:07:41.564775 containerd[1473]: time="2026-03-07T02:07:41.564571037Z" level=info msg="StopPodSandbox for \"e5289e09a66d1e51467b125f2f86fe7fcc09e7ffc3705dd25e0b4a141768f460\" returns successfully"
Mar 7 02:07:41.604405 containerd[1473]: time="2026-03-07T02:07:41.604278554Z" level=info msg="TearDown network for sandbox \"1d83ef91e2542ee19ae2639079b1db2f8c43f5a8d812f7432538b9ebe8b0d685\" successfully"
Mar 7 02:07:41.604405 containerd[1473]: time="2026-03-07T02:07:41.604331434Z" level=info msg="StopPodSandbox for \"1d83ef91e2542ee19ae2639079b1db2f8c43f5a8d812f7432538b9ebe8b0d685\" returns successfully"
Mar 7 02:07:41.614880 kubelet[2644]: I0307 02:07:41.614596 2644 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/80da2ab2-00ba-4c2c-9275-84bf86c3ce95-kube-api-access-pcdx2\" (UniqueName: \"kubernetes.io/projected/80da2ab2-00ba-4c2c-9275-84bf86c3ce95-kube-api-access-pcdx2\") pod \"80da2ab2-00ba-4c2c-9275-84bf86c3ce95\" (UID: \"80da2ab2-00ba-4c2c-9275-84bf86c3ce95\") "
Mar 7 02:07:41.614880 kubelet[2644]: I0307 02:07:41.614746 2644 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/80da2ab2-00ba-4c2c-9275-84bf86c3ce95-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/80da2ab2-00ba-4c2c-9275-84bf86c3ce95-cilium-config-path\") pod \"80da2ab2-00ba-4c2c-9275-84bf86c3ce95\" (UID: \"80da2ab2-00ba-4c2c-9275-84bf86c3ce95\") "
Mar 7 02:07:41.620684 kubelet[2644]: I0307 02:07:41.620485 2644 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80da2ab2-00ba-4c2c-9275-84bf86c3ce95-cilium-config-path" pod "80da2ab2-00ba-4c2c-9275-84bf86c3ce95" (UID: "80da2ab2-00ba-4c2c-9275-84bf86c3ce95"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 7 02:07:41.622075 kubelet[2644]: I0307 02:07:41.621868 2644 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80da2ab2-00ba-4c2c-9275-84bf86c3ce95-kube-api-access-pcdx2" pod "80da2ab2-00ba-4c2c-9275-84bf86c3ce95" (UID: "80da2ab2-00ba-4c2c-9275-84bf86c3ce95"). InnerVolumeSpecName "kube-api-access-pcdx2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 7 02:07:41.715847 kubelet[2644]: I0307 02:07:41.715693 2644 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-cilium-run\" (UniqueName: \"kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-cilium-run\") pod \"bc4d3e66-626e-445c-8828-cb0a16044b6f\" (UID: \"bc4d3e66-626e-445c-8828-cb0a16044b6f\") "
Mar 7 02:07:41.715847 kubelet[2644]: I0307 02:07:41.715786 2644 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-cilium-cgroup\") pod \"bc4d3e66-626e-445c-8828-cb0a16044b6f\" (UID: \"bc4d3e66-626e-445c-8828-cb0a16044b6f\") "
Mar 7 02:07:41.715847 kubelet[2644]: I0307 02:07:41.715797 2644 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-cilium-run" pod "bc4d3e66-626e-445c-8828-cb0a16044b6f" (UID: "bc4d3e66-626e-445c-8828-cb0a16044b6f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 02:07:41.715847 kubelet[2644]: I0307 02:07:41.715814 2644 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-bpf-maps\") pod \"bc4d3e66-626e-445c-8828-cb0a16044b6f\" (UID: \"bc4d3e66-626e-445c-8828-cb0a16044b6f\") "
Mar 7 02:07:41.715847 kubelet[2644]: I0307 02:07:41.715844 2644 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-host-proc-sys-net\") pod \"bc4d3e66-626e-445c-8828-cb0a16044b6f\" (UID: \"bc4d3e66-626e-445c-8828-cb0a16044b6f\") "
Mar 7 02:07:41.716256 kubelet[2644]: I0307 02:07:41.715865 2644 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-cilium-cgroup" pod "bc4d3e66-626e-445c-8828-cb0a16044b6f" (UID: "bc4d3e66-626e-445c-8828-cb0a16044b6f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 02:07:41.716256 kubelet[2644]: I0307 02:07:41.715876 2644 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-bpf-maps" pod "bc4d3e66-626e-445c-8828-cb0a16044b6f" (UID: "bc4d3e66-626e-445c-8828-cb0a16044b6f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 02:07:41.716256 kubelet[2644]: I0307 02:07:41.715873 2644 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-lib-modules\" (UniqueName: \"kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-lib-modules\") pod \"bc4d3e66-626e-445c-8828-cb0a16044b6f\" (UID: \"bc4d3e66-626e-445c-8828-cb0a16044b6f\") "
Mar 7 02:07:41.716256 kubelet[2644]: I0307 02:07:41.715895 2644 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-host-proc-sys-net" pod "bc4d3e66-626e-445c-8828-cb0a16044b6f" (UID: "bc4d3e66-626e-445c-8828-cb0a16044b6f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 02:07:41.716256 kubelet[2644]: I0307 02:07:41.715902 2644 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-lib-modules" pod "bc4d3e66-626e-445c-8828-cb0a16044b6f" (UID: "bc4d3e66-626e-445c-8828-cb0a16044b6f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 02:07:41.716401 kubelet[2644]: I0307 02:07:41.715920 2644 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-host-proc-sys-kernel\") pod \"bc4d3e66-626e-445c-8828-cb0a16044b6f\" (UID: \"bc4d3e66-626e-445c-8828-cb0a16044b6f\") "
Mar 7 02:07:41.716401 kubelet[2644]: I0307 02:07:41.715942 2644 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/bc4d3e66-626e-445c-8828-cb0a16044b6f-kube-api-access-tpf4s\" (UniqueName: \"kubernetes.io/projected/bc4d3e66-626e-445c-8828-cb0a16044b6f-kube-api-access-tpf4s\") pod \"bc4d3e66-626e-445c-8828-cb0a16044b6f\" (UID: \"bc4d3e66-626e-445c-8828-cb0a16044b6f\") "
Mar 7 02:07:41.716401 kubelet[2644]: I0307 02:07:41.715948 2644 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-host-proc-sys-kernel" pod "bc4d3e66-626e-445c-8828-cb0a16044b6f" (UID: "bc4d3e66-626e-445c-8828-cb0a16044b6f"). InnerVolumeSpecName "host-proc-sys-kernel".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 02:07:41.716401 kubelet[2644]: I0307 02:07:41.715960 2644 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/bc4d3e66-626e-445c-8828-cb0a16044b6f-hubble-tls\" (UniqueName: \"kubernetes.io/projected/bc4d3e66-626e-445c-8828-cb0a16044b6f-hubble-tls\") pod \"bc4d3e66-626e-445c-8828-cb0a16044b6f\" (UID: \"bc4d3e66-626e-445c-8828-cb0a16044b6f\") " Mar 7 02:07:41.716824 kubelet[2644]: I0307 02:07:41.716653 2644 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-cni-path\" (UniqueName: \"kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-cni-path\") pod \"bc4d3e66-626e-445c-8828-cb0a16044b6f\" (UID: \"bc4d3e66-626e-445c-8828-cb0a16044b6f\") " Mar 7 02:07:41.716905 kubelet[2644]: I0307 02:07:41.716840 2644 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/bc4d3e66-626e-445c-8828-cb0a16044b6f-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bc4d3e66-626e-445c-8828-cb0a16044b6f-cilium-config-path\") pod \"bc4d3e66-626e-445c-8828-cb0a16044b6f\" (UID: \"bc4d3e66-626e-445c-8828-cb0a16044b6f\") " Mar 7 02:07:41.717275 kubelet[2644]: I0307 02:07:41.717071 2644 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-etc-cni-netd\") pod \"bc4d3e66-626e-445c-8828-cb0a16044b6f\" (UID: \"bc4d3e66-626e-445c-8828-cb0a16044b6f\") " Mar 7 02:07:41.719499 kubelet[2644]: I0307 02:07:41.717326 2644 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-cni-path" pod "bc4d3e66-626e-445c-8828-cb0a16044b6f" (UID: "bc4d3e66-626e-445c-8828-cb0a16044b6f"). 
InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 02:07:41.719499 kubelet[2644]: I0307 02:07:41.717482 2644 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-etc-cni-netd" pod "bc4d3e66-626e-445c-8828-cb0a16044b6f" (UID: "bc4d3e66-626e-445c-8828-cb0a16044b6f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 02:07:41.719499 kubelet[2644]: I0307 02:07:41.719261 2644 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/bc4d3e66-626e-445c-8828-cb0a16044b6f-clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bc4d3e66-626e-445c-8828-cb0a16044b6f-clustermesh-secrets\") pod \"bc4d3e66-626e-445c-8828-cb0a16044b6f\" (UID: \"bc4d3e66-626e-445c-8828-cb0a16044b6f\") " Mar 7 02:07:41.719499 kubelet[2644]: I0307 02:07:41.719306 2644 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-xtables-lock\") pod \"bc4d3e66-626e-445c-8828-cb0a16044b6f\" (UID: \"bc4d3e66-626e-445c-8828-cb0a16044b6f\") " Mar 7 02:07:41.719499 kubelet[2644]: I0307 02:07:41.719326 2644 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-hostproc\" (UniqueName: \"kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-hostproc\") pod \"bc4d3e66-626e-445c-8828-cb0a16044b6f\" (UID: \"bc4d3e66-626e-445c-8828-cb0a16044b6f\") " Mar 7 02:07:41.719664 kubelet[2644]: I0307 02:07:41.719365 2644 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pcdx2\" (UniqueName: \"kubernetes.io/projected/80da2ab2-00ba-4c2c-9275-84bf86c3ce95-kube-api-access-pcdx2\") on node \"localhost\" DevicePath \"\"" Mar 7 
02:07:41.719664 kubelet[2644]: I0307 02:07:41.719376 2644 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 7 02:07:41.719664 kubelet[2644]: I0307 02:07:41.719384 2644 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/80da2ab2-00ba-4c2c-9275-84bf86c3ce95-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 7 02:07:41.719664 kubelet[2644]: I0307 02:07:41.719392 2644 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 7 02:07:41.719664 kubelet[2644]: I0307 02:07:41.719399 2644 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 7 02:07:41.719664 kubelet[2644]: I0307 02:07:41.719407 2644 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Mar 7 02:07:41.719664 kubelet[2644]: I0307 02:07:41.719415 2644 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 7 02:07:41.719664 kubelet[2644]: I0307 02:07:41.719422 2644 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 7 02:07:41.719990 kubelet[2644]: I0307 02:07:41.719430 2644 reconciler_common.go:299] "Volume detached for volume 
\"cni-path\" (UniqueName: \"kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 7 02:07:41.719990 kubelet[2644]: I0307 02:07:41.719440 2644 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Mar 7 02:07:41.719990 kubelet[2644]: I0307 02:07:41.719464 2644 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-hostproc" pod "bc4d3e66-626e-445c-8828-cb0a16044b6f" (UID: "bc4d3e66-626e-445c-8828-cb0a16044b6f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 02:07:41.719990 kubelet[2644]: I0307 02:07:41.719481 2644 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-xtables-lock" pod "bc4d3e66-626e-445c-8828-cb0a16044b6f" (UID: "bc4d3e66-626e-445c-8828-cb0a16044b6f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 02:07:41.720841 kubelet[2644]: I0307 02:07:41.720752 2644 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc4d3e66-626e-445c-8828-cb0a16044b6f-kube-api-access-tpf4s" pod "bc4d3e66-626e-445c-8828-cb0a16044b6f" (UID: "bc4d3e66-626e-445c-8828-cb0a16044b6f"). InnerVolumeSpecName "kube-api-access-tpf4s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 7 02:07:41.722012 kubelet[2644]: I0307 02:07:41.721967 2644 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc4d3e66-626e-445c-8828-cb0a16044b6f-hubble-tls" pod "bc4d3e66-626e-445c-8828-cb0a16044b6f" (UID: "bc4d3e66-626e-445c-8828-cb0a16044b6f"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 7 02:07:41.724272 kubelet[2644]: I0307 02:07:41.724073 2644 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc4d3e66-626e-445c-8828-cb0a16044b6f-clustermesh-secrets" pod "bc4d3e66-626e-445c-8828-cb0a16044b6f" (UID: "bc4d3e66-626e-445c-8828-cb0a16044b6f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 7 02:07:41.724515 kubelet[2644]: I0307 02:07:41.724447 2644 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc4d3e66-626e-445c-8828-cb0a16044b6f-cilium-config-path" pod "bc4d3e66-626e-445c-8828-cb0a16044b6f" (UID: "bc4d3e66-626e-445c-8828-cb0a16044b6f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 02:07:41.778120 kubelet[2644]: I0307 02:07:41.774616 2644 scope.go:122] "RemoveContainer" containerID="f746e26bc1912a41ca7d6028a69ba1fe4f6faffaa523360dbffdedae17f48410" Mar 7 02:07:41.778385 containerd[1473]: time="2026-03-07T02:07:41.776927099Z" level=info msg="RemoveContainer for \"f746e26bc1912a41ca7d6028a69ba1fe4f6faffaa523360dbffdedae17f48410\"" Mar 7 02:07:41.782538 containerd[1473]: time="2026-03-07T02:07:41.782488786Z" level=info msg="RemoveContainer for \"f746e26bc1912a41ca7d6028a69ba1fe4f6faffaa523360dbffdedae17f48410\" returns successfully" Mar 7 02:07:41.783066 kubelet[2644]: I0307 02:07:41.782868 2644 scope.go:122] "RemoveContainer" containerID="f4cd76a7b354dd1a4330a98cd7f06f92bb672d063db12d4c596c7b73b9de934e" Mar 7 02:07:41.785381 containerd[1473]: time="2026-03-07T02:07:41.784966402Z" level=info msg="RemoveContainer for \"f4cd76a7b354dd1a4330a98cd7f06f92bb672d063db12d4c596c7b73b9de934e\"" Mar 7 02:07:41.786901 systemd[1]: Removed slice kubepods-burstable-podbc4d3e66_626e_445c_8828_cb0a16044b6f.slice - libcontainer container kubepods-burstable-podbc4d3e66_626e_445c_8828_cb0a16044b6f.slice. 
Mar 7 02:07:41.787014 systemd[1]: kubepods-burstable-podbc4d3e66_626e_445c_8828_cb0a16044b6f.slice: Consumed 22.045s CPU time.
Mar 7 02:07:41.788912 systemd[1]: Removed slice kubepods-besteffort-pod80da2ab2_00ba_4c2c_9275_84bf86c3ce95.slice - libcontainer container kubepods-besteffort-pod80da2ab2_00ba_4c2c_9275_84bf86c3ce95.slice.
Mar 7 02:07:41.789345 systemd[1]: kubepods-besteffort-pod80da2ab2_00ba_4c2c_9275_84bf86c3ce95.slice: Consumed 2.211s CPU time.
Mar 7 02:07:41.790871 containerd[1473]: time="2026-03-07T02:07:41.790769192Z" level=info msg="RemoveContainer for \"f4cd76a7b354dd1a4330a98cd7f06f92bb672d063db12d4c596c7b73b9de934e\" returns successfully"
Mar 7 02:07:41.791007 kubelet[2644]: I0307 02:07:41.790981 2644 scope.go:122] "RemoveContainer" containerID="c15c2526c987910e586af507e265d39c3477628d175025c51dc431ad5dacaf06"
Mar 7 02:07:41.792901 containerd[1473]: time="2026-03-07T02:07:41.792627382Z" level=info msg="RemoveContainer for \"c15c2526c987910e586af507e265d39c3477628d175025c51dc431ad5dacaf06\""
Mar 7 02:07:41.797922 containerd[1473]: time="2026-03-07T02:07:41.797810123Z" level=info msg="RemoveContainer for \"c15c2526c987910e586af507e265d39c3477628d175025c51dc431ad5dacaf06\" returns successfully"
Mar 7 02:07:41.798531 kubelet[2644]: I0307 02:07:41.798446 2644 scope.go:122] "RemoveContainer" containerID="595db9abcfe5a48aaf5e998207c5d3f840fc6a543204d10d59534ad168ad5fdf"
Mar 7 02:07:41.799980 containerd[1473]: time="2026-03-07T02:07:41.799935947Z" level=info msg="RemoveContainer for \"595db9abcfe5a48aaf5e998207c5d3f840fc6a543204d10d59534ad168ad5fdf\""
Mar 7 02:07:41.807329 containerd[1473]: time="2026-03-07T02:07:41.807099530Z" level=info msg="RemoveContainer for \"595db9abcfe5a48aaf5e998207c5d3f840fc6a543204d10d59534ad168ad5fdf\" returns successfully"
Mar 7 02:07:41.807627 kubelet[2644]: I0307 02:07:41.807491 2644 scope.go:122] "RemoveContainer" containerID="2f3fd2d560b5ff1835b10c74eb76dcd4411c964836932fa3e9a5ed87a9294325"
Mar 7 02:07:41.810164 containerd[1473]: time="2026-03-07T02:07:41.809943705Z" level=info msg="RemoveContainer for \"2f3fd2d560b5ff1835b10c74eb76dcd4411c964836932fa3e9a5ed87a9294325\""
Mar 7 02:07:41.815344 containerd[1473]: time="2026-03-07T02:07:41.815067689Z" level=info msg="RemoveContainer for \"2f3fd2d560b5ff1835b10c74eb76dcd4411c964836932fa3e9a5ed87a9294325\" returns successfully"
Mar 7 02:07:41.817381 kubelet[2644]: I0307 02:07:41.816540 2644 scope.go:122] "RemoveContainer" containerID="f746e26bc1912a41ca7d6028a69ba1fe4f6faffaa523360dbffdedae17f48410"
Mar 7 02:07:41.817692 containerd[1473]: time="2026-03-07T02:07:41.817617139Z" level=error msg="ContainerStatus for \"f746e26bc1912a41ca7d6028a69ba1fe4f6faffaa523360dbffdedae17f48410\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f746e26bc1912a41ca7d6028a69ba1fe4f6faffaa523360dbffdedae17f48410\": not found"
Mar 7 02:07:41.818710 kubelet[2644]: E0307 02:07:41.818675 2644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f746e26bc1912a41ca7d6028a69ba1fe4f6faffaa523360dbffdedae17f48410\": not found" containerID="f746e26bc1912a41ca7d6028a69ba1fe4f6faffaa523360dbffdedae17f48410"
Mar 7 02:07:41.819042 kubelet[2644]: I0307 02:07:41.818869 2644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f746e26bc1912a41ca7d6028a69ba1fe4f6faffaa523360dbffdedae17f48410"} err="failed to get container status \"f746e26bc1912a41ca7d6028a69ba1fe4f6faffaa523360dbffdedae17f48410\": rpc error: code = NotFound desc = an error occurred when try to find container \"f746e26bc1912a41ca7d6028a69ba1fe4f6faffaa523360dbffdedae17f48410\": not found"
Mar 7 02:07:41.819187 kubelet[2644]: I0307 02:07:41.819111 2644 scope.go:122] "RemoveContainer" containerID="f4cd76a7b354dd1a4330a98cd7f06f92bb672d063db12d4c596c7b73b9de934e"
Mar 7 02:07:41.819588 kubelet[2644]: I0307 02:07:41.819535 2644 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tpf4s\" (UniqueName: \"kubernetes.io/projected/bc4d3e66-626e-445c-8828-cb0a16044b6f-kube-api-access-tpf4s\") on node \"localhost\" DevicePath \"\""
Mar 7 02:07:41.819588 kubelet[2644]: I0307 02:07:41.819559 2644 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bc4d3e66-626e-445c-8828-cb0a16044b6f-hubble-tls\") on node \"localhost\" DevicePath \"\""
Mar 7 02:07:41.819588 kubelet[2644]: I0307 02:07:41.819576 2644 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bc4d3e66-626e-445c-8828-cb0a16044b6f-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 7 02:07:41.819588 kubelet[2644]: I0307 02:07:41.819588 2644 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bc4d3e66-626e-445c-8828-cb0a16044b6f-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Mar 7 02:07:41.819776 kubelet[2644]: I0307 02:07:41.819600 2644 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-xtables-lock\") on node \"localhost\" DevicePath \"\""
Mar 7 02:07:41.819776 kubelet[2644]: I0307 02:07:41.819710 2644 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bc4d3e66-626e-445c-8828-cb0a16044b6f-hostproc\") on node \"localhost\" DevicePath \"\""
Mar 7 02:07:41.819857 containerd[1473]: time="2026-03-07T02:07:41.819705462Z" level=error msg="ContainerStatus for \"f4cd76a7b354dd1a4330a98cd7f06f92bb672d063db12d4c596c7b73b9de934e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f4cd76a7b354dd1a4330a98cd7f06f92bb672d063db12d4c596c7b73b9de934e\": not found"
Mar 7 02:07:41.819986 kubelet[2644]: E0307 02:07:41.819908 2644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f4cd76a7b354dd1a4330a98cd7f06f92bb672d063db12d4c596c7b73b9de934e\": not found" containerID="f4cd76a7b354dd1a4330a98cd7f06f92bb672d063db12d4c596c7b73b9de934e"
Mar 7 02:07:41.819986 kubelet[2644]: I0307 02:07:41.819938 2644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f4cd76a7b354dd1a4330a98cd7f06f92bb672d063db12d4c596c7b73b9de934e"} err="failed to get container status \"f4cd76a7b354dd1a4330a98cd7f06f92bb672d063db12d4c596c7b73b9de934e\": rpc error: code = NotFound desc = an error occurred when try to find container \"f4cd76a7b354dd1a4330a98cd7f06f92bb672d063db12d4c596c7b73b9de934e\": not found"
Mar 7 02:07:41.819986 kubelet[2644]: I0307 02:07:41.819958 2644 scope.go:122] "RemoveContainer" containerID="c15c2526c987910e586af507e265d39c3477628d175025c51dc431ad5dacaf06"
Mar 7 02:07:41.820276 containerd[1473]: time="2026-03-07T02:07:41.820094255Z" level=error msg="ContainerStatus for \"c15c2526c987910e586af507e265d39c3477628d175025c51dc431ad5dacaf06\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c15c2526c987910e586af507e265d39c3477628d175025c51dc431ad5dacaf06\": not found"
Mar 7 02:07:41.820552 kubelet[2644]: E0307 02:07:41.820421 2644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c15c2526c987910e586af507e265d39c3477628d175025c51dc431ad5dacaf06\": not found" containerID="c15c2526c987910e586af507e265d39c3477628d175025c51dc431ad5dacaf06"
Mar 7 02:07:41.820552 kubelet[2644]: I0307 02:07:41.820484 2644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c15c2526c987910e586af507e265d39c3477628d175025c51dc431ad5dacaf06"} err="failed to get container status \"c15c2526c987910e586af507e265d39c3477628d175025c51dc431ad5dacaf06\": rpc error: code = NotFound desc = an error occurred when try to find container \"c15c2526c987910e586af507e265d39c3477628d175025c51dc431ad5dacaf06\": not found"
Mar 7 02:07:41.820552 kubelet[2644]: I0307 02:07:41.820500 2644 scope.go:122] "RemoveContainer" containerID="595db9abcfe5a48aaf5e998207c5d3f840fc6a543204d10d59534ad168ad5fdf"
Mar 7 02:07:41.820777 kubelet[2644]: E0307 02:07:41.820753 2644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"595db9abcfe5a48aaf5e998207c5d3f840fc6a543204d10d59534ad168ad5fdf\": not found" containerID="595db9abcfe5a48aaf5e998207c5d3f840fc6a543204d10d59534ad168ad5fdf"
Mar 7 02:07:41.820777 kubelet[2644]: I0307 02:07:41.820769 2644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"595db9abcfe5a48aaf5e998207c5d3f840fc6a543204d10d59534ad168ad5fdf"} err="failed to get container status \"595db9abcfe5a48aaf5e998207c5d3f840fc6a543204d10d59534ad168ad5fdf\": rpc error: code = NotFound desc = an error occurred when try to find container \"595db9abcfe5a48aaf5e998207c5d3f840fc6a543204d10d59534ad168ad5fdf\": not found"
Mar 7 02:07:41.820777 kubelet[2644]: I0307 02:07:41.820780 2644 scope.go:122] "RemoveContainer" containerID="2f3fd2d560b5ff1835b10c74eb76dcd4411c964836932fa3e9a5ed87a9294325"
Mar 7 02:07:41.820896 containerd[1473]: time="2026-03-07T02:07:41.820638761Z" level=error msg="ContainerStatus for \"595db9abcfe5a48aaf5e998207c5d3f840fc6a543204d10d59534ad168ad5fdf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"595db9abcfe5a48aaf5e998207c5d3f840fc6a543204d10d59534ad168ad5fdf\": not found"
Mar 7 02:07:41.820952 containerd[1473]: time="2026-03-07T02:07:41.820905582Z" level=error msg="ContainerStatus for \"2f3fd2d560b5ff1835b10c74eb76dcd4411c964836932fa3e9a5ed87a9294325\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2f3fd2d560b5ff1835b10c74eb76dcd4411c964836932fa3e9a5ed87a9294325\": not found"
Mar 7 02:07:41.821276 kubelet[2644]: E0307 02:07:41.821073 2644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2f3fd2d560b5ff1835b10c74eb76dcd4411c964836932fa3e9a5ed87a9294325\": not found" containerID="2f3fd2d560b5ff1835b10c74eb76dcd4411c964836932fa3e9a5ed87a9294325"
Mar 7 02:07:41.821276 kubelet[2644]: I0307 02:07:41.821182 2644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2f3fd2d560b5ff1835b10c74eb76dcd4411c964836932fa3e9a5ed87a9294325"} err="failed to get container status \"2f3fd2d560b5ff1835b10c74eb76dcd4411c964836932fa3e9a5ed87a9294325\": rpc error: code = NotFound desc = an error occurred when try to find container \"2f3fd2d560b5ff1835b10c74eb76dcd4411c964836932fa3e9a5ed87a9294325\": not found"
Mar 7 02:07:41.821276 kubelet[2644]: I0307 02:07:41.821244 2644 scope.go:122] "RemoveContainer" containerID="cfbb0adeb5a94be565cd412262dd0fa66dde372053e5797aeb46671aadf8c11d"
Mar 7 02:07:41.823771 containerd[1473]: time="2026-03-07T02:07:41.823678977Z" level=info msg="RemoveContainer for \"cfbb0adeb5a94be565cd412262dd0fa66dde372053e5797aeb46671aadf8c11d\""
Mar 7 02:07:41.830293 containerd[1473]: time="2026-03-07T02:07:41.830035697Z" level=info msg="RemoveContainer for \"cfbb0adeb5a94be565cd412262dd0fa66dde372053e5797aeb46671aadf8c11d\" returns successfully"
Mar 7 02:07:41.830415 kubelet[2644]: I0307 02:07:41.830369 2644 scope.go:122] "RemoveContainer" containerID="cfbb0adeb5a94be565cd412262dd0fa66dde372053e5797aeb46671aadf8c11d"
Mar 7 02:07:41.830917 containerd[1473]: time="2026-03-07T02:07:41.830564213Z" level=error msg="ContainerStatus for \"cfbb0adeb5a94be565cd412262dd0fa66dde372053e5797aeb46671aadf8c11d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cfbb0adeb5a94be565cd412262dd0fa66dde372053e5797aeb46671aadf8c11d\": not found"
Mar 7 02:07:41.831009 kubelet[2644]: E0307 02:07:41.830813 2644 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cfbb0adeb5a94be565cd412262dd0fa66dde372053e5797aeb46671aadf8c11d\": not found" containerID="cfbb0adeb5a94be565cd412262dd0fa66dde372053e5797aeb46671aadf8c11d"
Mar 7 02:07:41.831009 kubelet[2644]: I0307 02:07:41.830842 2644 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cfbb0adeb5a94be565cd412262dd0fa66dde372053e5797aeb46671aadf8c11d"} err="failed to get container status \"cfbb0adeb5a94be565cd412262dd0fa66dde372053e5797aeb46671aadf8c11d\": rpc error: code = NotFound desc = an error occurred when try to find container \"cfbb0adeb5a94be565cd412262dd0fa66dde372053e5797aeb46671aadf8c11d\": not found"
Mar 7 02:07:42.307882 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d83ef91e2542ee19ae2639079b1db2f8c43f5a8d812f7432538b9ebe8b0d685-rootfs.mount: Deactivated successfully.
Mar 7 02:07:42.308051 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5289e09a66d1e51467b125f2f86fe7fcc09e7ffc3705dd25e0b4a141768f460-rootfs.mount: Deactivated successfully.
Mar 7 02:07:42.308129 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1d83ef91e2542ee19ae2639079b1db2f8c43f5a8d812f7432538b9ebe8b0d685-shm.mount: Deactivated successfully.
Mar 7 02:07:42.308341 systemd[1]: var-lib-kubelet-pods-bc4d3e66\x2d626e\x2d445c\x2d8828\x2dcb0a16044b6f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtpf4s.mount: Deactivated successfully.
Mar 7 02:07:42.308434 systemd[1]: var-lib-kubelet-pods-80da2ab2\x2d00ba\x2d4c2c\x2d9275\x2d84bf86c3ce95-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpcdx2.mount: Deactivated successfully.
Mar 7 02:07:42.308552 systemd[1]: var-lib-kubelet-pods-bc4d3e66\x2d626e\x2d445c\x2d8828\x2dcb0a16044b6f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 7 02:07:42.308637 systemd[1]: var-lib-kubelet-pods-bc4d3e66\x2d626e\x2d445c\x2d8828\x2dcb0a16044b6f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Mar 7 02:07:42.328704 kubelet[2644]: E0307 02:07:42.328603 2644 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 7 02:07:43.251942 sshd[4521]: pam_unix(sshd:session): session closed for user core
Mar 7 02:07:43.263927 systemd[1]: sshd@26-10.0.0.146:22-10.0.0.1:52190.service: Deactivated successfully.
Mar 7 02:07:43.267437 systemd[1]: session-27.scope: Deactivated successfully.
Mar 7 02:07:43.270341 systemd-logind[1461]: Session 27 logged out. Waiting for processes to exit.
Mar 7 02:07:43.280082 systemd[1]: Started sshd@27-10.0.0.146:22-10.0.0.1:56568.service - OpenSSH per-connection server daemon (10.0.0.1:56568).
Mar 7 02:07:43.283684 systemd-logind[1461]: Removed session 27.
Mar 7 02:07:43.353469 sshd[4683]: Accepted publickey for core from 10.0.0.1 port 56568 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:07:43.357006 sshd[4683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:07:43.375469 systemd-logind[1461]: New session 28 of user core.
Mar 7 02:07:43.390726 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 7 02:07:43.601384 kubelet[2644]: I0307 02:07:43.596995 2644 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="80da2ab2-00ba-4c2c-9275-84bf86c3ce95" path="/var/lib/kubelet/pods/80da2ab2-00ba-4c2c-9275-84bf86c3ce95/volumes"
Mar 7 02:07:43.601384 kubelet[2644]: I0307 02:07:43.601096 2644 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="bc4d3e66-626e-445c-8828-cb0a16044b6f" path="/var/lib/kubelet/pods/bc4d3e66-626e-445c-8828-cb0a16044b6f/volumes"
Mar 7 02:07:44.505062 sshd[4683]: pam_unix(sshd:session): session closed for user core
Mar 7 02:07:44.531005 systemd[1]: sshd@27-10.0.0.146:22-10.0.0.1:56568.service: Deactivated successfully.
Mar 7 02:07:44.538029 systemd[1]: session-28.scope: Deactivated successfully.
Mar 7 02:07:44.546133 systemd-logind[1461]: Session 28 logged out. Waiting for processes to exit.
Mar 7 02:07:44.575097 systemd[1]: Started sshd@28-10.0.0.146:22-10.0.0.1:56580.service - OpenSSH per-connection server daemon (10.0.0.1:56580).
Mar 7 02:07:44.586954 systemd-logind[1461]: Removed session 28.
Mar 7 02:07:44.638129 systemd[1]: Created slice kubepods-burstable-pod8933ab0b_74e4_4143_b703_d89ac488f87b.slice - libcontainer container kubepods-burstable-pod8933ab0b_74e4_4143_b703_d89ac488f87b.slice.
Mar 7 02:07:44.641749 kubelet[2644]: I0307 02:07:44.641667 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8933ab0b-74e4-4143-b703-d89ac488f87b-cilium-cgroup\") pod \"cilium-w9ncl\" (UID: \"8933ab0b-74e4-4143-b703-d89ac488f87b\") " pod="kube-system/cilium-w9ncl"
Mar 7 02:07:44.642341 kubelet[2644]: I0307 02:07:44.641759 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8933ab0b-74e4-4143-b703-d89ac488f87b-cilium-run\") pod \"cilium-w9ncl\" (UID: \"8933ab0b-74e4-4143-b703-d89ac488f87b\") " pod="kube-system/cilium-w9ncl"
Mar 7 02:07:44.642341 kubelet[2644]: I0307 02:07:44.641782 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8933ab0b-74e4-4143-b703-d89ac488f87b-bpf-maps\") pod \"cilium-w9ncl\" (UID: \"8933ab0b-74e4-4143-b703-d89ac488f87b\") " pod="kube-system/cilium-w9ncl"
Mar 7 02:07:44.642341 kubelet[2644]: I0307 02:07:44.641803 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8933ab0b-74e4-4143-b703-d89ac488f87b-host-proc-sys-net\") pod \"cilium-w9ncl\" (UID: \"8933ab0b-74e4-4143-b703-d89ac488f87b\") " pod="kube-system/cilium-w9ncl"
Mar 7 02:07:44.642341 kubelet[2644]: I0307 02:07:44.641828 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8933ab0b-74e4-4143-b703-d89ac488f87b-hostproc\") pod \"cilium-w9ncl\" (UID: \"8933ab0b-74e4-4143-b703-d89ac488f87b\") " pod="kube-system/cilium-w9ncl"
Mar 7 02:07:44.642341 kubelet[2644]: I0307 02:07:44.641851 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8933ab0b-74e4-4143-b703-d89ac488f87b-cilium-config-path\") pod \"cilium-w9ncl\" (UID: \"8933ab0b-74e4-4143-b703-d89ac488f87b\") " pod="kube-system/cilium-w9ncl"
Mar 7 02:07:44.642341 kubelet[2644]: I0307 02:07:44.641889 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qgbq\" (UniqueName: \"kubernetes.io/projected/8933ab0b-74e4-4143-b703-d89ac488f87b-kube-api-access-2qgbq\") pod \"cilium-w9ncl\" (UID: \"8933ab0b-74e4-4143-b703-d89ac488f87b\") " pod="kube-system/cilium-w9ncl"
Mar 7 02:07:44.642559 kubelet[2644]: I0307 02:07:44.641915 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8933ab0b-74e4-4143-b703-d89ac488f87b-etc-cni-netd\") pod \"cilium-w9ncl\" (UID: \"8933ab0b-74e4-4143-b703-d89ac488f87b\") " pod="kube-system/cilium-w9ncl"
Mar 7 02:07:44.642559 kubelet[2644]: I0307 02:07:44.641938 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8933ab0b-74e4-4143-b703-d89ac488f87b-cni-path\") pod \"cilium-w9ncl\" (UID: \"8933ab0b-74e4-4143-b703-d89ac488f87b\") " pod="kube-system/cilium-w9ncl"
Mar 7 02:07:44.642559 kubelet[2644]: I0307 02:07:44.641962 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8933ab0b-74e4-4143-b703-d89ac488f87b-xtables-lock\") pod \"cilium-w9ncl\" (UID: \"8933ab0b-74e4-4143-b703-d89ac488f87b\") " pod="kube-system/cilium-w9ncl"
Mar 7 02:07:44.642559 kubelet[2644]: I0307 02:07:44.641986 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8933ab0b-74e4-4143-b703-d89ac488f87b-clustermesh-secrets\") pod \"cilium-w9ncl\" (UID: \"8933ab0b-74e4-4143-b703-d89ac488f87b\") " pod="kube-system/cilium-w9ncl"
Mar 7 02:07:44.642559 kubelet[2644]: I0307 02:07:44.642018 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8933ab0b-74e4-4143-b703-d89ac488f87b-cilium-ipsec-secrets\") pod \"cilium-w9ncl\" (UID: \"8933ab0b-74e4-4143-b703-d89ac488f87b\") " pod="kube-system/cilium-w9ncl"
Mar 7 02:07:44.642559 kubelet[2644]: I0307 02:07:44.642041 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8933ab0b-74e4-4143-b703-d89ac488f87b-lib-modules\") pod \"cilium-w9ncl\" (UID: \"8933ab0b-74e4-4143-b703-d89ac488f87b\") " pod="kube-system/cilium-w9ncl"
Mar 7 02:07:44.642791 kubelet[2644]: I0307 02:07:44.642119 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8933ab0b-74e4-4143-b703-d89ac488f87b-host-proc-sys-kernel\") pod \"cilium-w9ncl\" (UID: \"8933ab0b-74e4-4143-b703-d89ac488f87b\") " pod="kube-system/cilium-w9ncl"
Mar 7 02:07:44.642791 kubelet[2644]: I0307 02:07:44.642146 2644 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8933ab0b-74e4-4143-b703-d89ac488f87b-hubble-tls\") pod \"cilium-w9ncl\" (UID: \"8933ab0b-74e4-4143-b703-d89ac488f87b\") " pod="kube-system/cilium-w9ncl"
Mar 7 02:07:44.690984 sshd[4698]: Accepted publickey for core from 10.0.0.1 port 56580 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:07:44.693490 sshd[4698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:07:44.706456 systemd-logind[1461]: New session 29 of user core.
Mar 7 02:07:44.721973 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 7 02:07:44.797687 sshd[4698]: pam_unix(sshd:session): session closed for user core
Mar 7 02:07:44.820360 systemd[1]: sshd@28-10.0.0.146:22-10.0.0.1:56580.service: Deactivated successfully.
Mar 7 02:07:44.826168 systemd[1]: session-29.scope: Deactivated successfully.
Mar 7 02:07:44.829549 systemd-logind[1461]: Session 29 logged out. Waiting for processes to exit.
Mar 7 02:07:44.847463 systemd[1]: Started sshd@29-10.0.0.146:22-10.0.0.1:56594.service - OpenSSH per-connection server daemon (10.0.0.1:56594).
Mar 7 02:07:44.851462 systemd-logind[1461]: Removed session 29.
Mar 7 02:07:44.914848 sshd[4710]: Accepted publickey for core from 10.0.0.1 port 56594 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 02:07:44.916085 sshd[4710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 02:07:44.932710 systemd-logind[1461]: New session 30 of user core.
Mar 7 02:07:44.940890 systemd[1]: Started session-30.scope - Session 30 of User core.
Mar 7 02:07:44.956286 kubelet[2644]: E0307 02:07:44.956084 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:07:44.957458 containerd[1473]: time="2026-03-07T02:07:44.957411962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w9ncl,Uid:8933ab0b-74e4-4143-b703-d89ac488f87b,Namespace:kube-system,Attempt:0,}"
Mar 7 02:07:45.027309 containerd[1473]: time="2026-03-07T02:07:45.023918226Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 02:07:45.027309 containerd[1473]: time="2026-03-07T02:07:45.024065644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 02:07:45.027309 containerd[1473]: time="2026-03-07T02:07:45.024092885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 02:07:45.027309 containerd[1473]: time="2026-03-07T02:07:45.024339629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 02:07:45.093857 systemd[1]: Started cri-containerd-2a631d4be2adfa83d6423e05f064c47e450452436534696e60ce1daa7431f4ee.scope - libcontainer container 2a631d4be2adfa83d6423e05f064c47e450452436534696e60ce1daa7431f4ee.
Mar 7 02:07:45.183736 containerd[1473]: time="2026-03-07T02:07:45.183628879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w9ncl,Uid:8933ab0b-74e4-4143-b703-d89ac488f87b,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a631d4be2adfa83d6423e05f064c47e450452436534696e60ce1daa7431f4ee\""
Mar 7 02:07:45.193771 kubelet[2644]: E0307 02:07:45.193551 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:07:45.205065 containerd[1473]: time="2026-03-07T02:07:45.204939332Z" level=info msg="CreateContainer within sandbox \"2a631d4be2adfa83d6423e05f064c47e450452436534696e60ce1daa7431f4ee\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 7 02:07:45.247326 containerd[1473]: time="2026-03-07T02:07:45.246680129Z" level=info msg="CreateContainer within sandbox \"2a631d4be2adfa83d6423e05f064c47e450452436534696e60ce1daa7431f4ee\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"588753efe22435680900a61c5db935e389ea22d2a020c17eaec53e114cc1f9a3\""
Mar 7 02:07:45.251573 containerd[1473]: time="2026-03-07T02:07:45.248501521Z" level=info msg="StartContainer for \"588753efe22435680900a61c5db935e389ea22d2a020c17eaec53e114cc1f9a3\""
Mar 7 02:07:45.324646 systemd[1]: Started cri-containerd-588753efe22435680900a61c5db935e389ea22d2a020c17eaec53e114cc1f9a3.scope - libcontainer container 588753efe22435680900a61c5db935e389ea22d2a020c17eaec53e114cc1f9a3.
Mar 7 02:07:45.407154 containerd[1473]: time="2026-03-07T02:07:45.406048100Z" level=info msg="StartContainer for \"588753efe22435680900a61c5db935e389ea22d2a020c17eaec53e114cc1f9a3\" returns successfully"
Mar 7 02:07:45.442860 systemd[1]: cri-containerd-588753efe22435680900a61c5db935e389ea22d2a020c17eaec53e114cc1f9a3.scope: Deactivated successfully.
Mar 7 02:07:45.540032 containerd[1473]: time="2026-03-07T02:07:45.539818490Z" level=info msg="shim disconnected" id=588753efe22435680900a61c5db935e389ea22d2a020c17eaec53e114cc1f9a3 namespace=k8s.io
Mar 7 02:07:45.540032 containerd[1473]: time="2026-03-07T02:07:45.539923457Z" level=warning msg="cleaning up after shim disconnected" id=588753efe22435680900a61c5db935e389ea22d2a020c17eaec53e114cc1f9a3 namespace=k8s.io
Mar 7 02:07:45.540032 containerd[1473]: time="2026-03-07T02:07:45.539941802Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 02:07:45.827800 kubelet[2644]: E0307 02:07:45.827693 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:07:45.885957 containerd[1473]: time="2026-03-07T02:07:45.885788395Z" level=info msg="CreateContainer within sandbox \"2a631d4be2adfa83d6423e05f064c47e450452436534696e60ce1daa7431f4ee\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 7 02:07:45.937445 containerd[1473]: time="2026-03-07T02:07:45.935823115Z" level=info msg="CreateContainer within sandbox \"2a631d4be2adfa83d6423e05f064c47e450452436534696e60ce1daa7431f4ee\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"14fc24475be5f37236a61a65d95c559ba7ed653a3634572c80bb219675e9b726\""
Mar 7 02:07:45.937445 containerd[1473]: time="2026-03-07T02:07:45.937015512Z" level=info msg="StartContainer for \"14fc24475be5f37236a61a65d95c559ba7ed653a3634572c80bb219675e9b726\""
Mar 7 02:07:46.010653 systemd[1]: Started cri-containerd-14fc24475be5f37236a61a65d95c559ba7ed653a3634572c80bb219675e9b726.scope - libcontainer container 14fc24475be5f37236a61a65d95c559ba7ed653a3634572c80bb219675e9b726.
Mar 7 02:07:46.103418 containerd[1473]: time="2026-03-07T02:07:46.100019530Z" level=info msg="StartContainer for \"14fc24475be5f37236a61a65d95c559ba7ed653a3634572c80bb219675e9b726\" returns successfully"
Mar 7 02:07:46.125882 systemd[1]: cri-containerd-14fc24475be5f37236a61a65d95c559ba7ed653a3634572c80bb219675e9b726.scope: Deactivated successfully.
Mar 7 02:07:46.239731 containerd[1473]: time="2026-03-07T02:07:46.239107778Z" level=info msg="shim disconnected" id=14fc24475be5f37236a61a65d95c559ba7ed653a3634572c80bb219675e9b726 namespace=k8s.io
Mar 7 02:07:46.239731 containerd[1473]: time="2026-03-07T02:07:46.239176577Z" level=warning msg="cleaning up after shim disconnected" id=14fc24475be5f37236a61a65d95c559ba7ed653a3634572c80bb219675e9b726 namespace=k8s.io
Mar 7 02:07:46.244351 containerd[1473]: time="2026-03-07T02:07:46.242329226Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 02:07:46.847477 kubelet[2644]: E0307 02:07:46.844669 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:07:46.891874 containerd[1473]: time="2026-03-07T02:07:46.887393773Z" level=info msg="CreateContainer within sandbox \"2a631d4be2adfa83d6423e05f064c47e450452436534696e60ce1daa7431f4ee\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 7 02:07:47.005037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2901026675.mount: Deactivated successfully.
Mar 7 02:07:47.016144 containerd[1473]: time="2026-03-07T02:07:47.015937613Z" level=info msg="CreateContainer within sandbox \"2a631d4be2adfa83d6423e05f064c47e450452436534696e60ce1daa7431f4ee\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e4993a8d5ed80c5cabc586909556bad1fa53a2353c7804e5f5df7765b111b56c\""
Mar 7 02:07:47.017656 containerd[1473]: time="2026-03-07T02:07:47.017411423Z" level=info msg="StartContainer for \"e4993a8d5ed80c5cabc586909556bad1fa53a2353c7804e5f5df7765b111b56c\""
Mar 7 02:07:47.127868 systemd[1]: Started cri-containerd-e4993a8d5ed80c5cabc586909556bad1fa53a2353c7804e5f5df7765b111b56c.scope - libcontainer container e4993a8d5ed80c5cabc586909556bad1fa53a2353c7804e5f5df7765b111b56c.
Mar 7 02:07:47.276533 containerd[1473]: time="2026-03-07T02:07:47.275081318Z" level=info msg="StartContainer for \"e4993a8d5ed80c5cabc586909556bad1fa53a2353c7804e5f5df7765b111b56c\" returns successfully"
Mar 7 02:07:47.301850 systemd[1]: cri-containerd-e4993a8d5ed80c5cabc586909556bad1fa53a2353c7804e5f5df7765b111b56c.scope: Deactivated successfully.
Mar 7 02:07:47.341471 kubelet[2644]: E0307 02:07:47.337041 2644 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 7 02:07:47.470379 containerd[1473]: time="2026-03-07T02:07:47.462608868Z" level=info msg="shim disconnected" id=e4993a8d5ed80c5cabc586909556bad1fa53a2353c7804e5f5df7765b111b56c namespace=k8s.io
Mar 7 02:07:47.470379 containerd[1473]: time="2026-03-07T02:07:47.463414586Z" level=warning msg="cleaning up after shim disconnected" id=e4993a8d5ed80c5cabc586909556bad1fa53a2353c7804e5f5df7765b111b56c namespace=k8s.io
Mar 7 02:07:47.470379 containerd[1473]: time="2026-03-07T02:07:47.463432620Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 02:07:47.761363 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4993a8d5ed80c5cabc586909556bad1fa53a2353c7804e5f5df7765b111b56c-rootfs.mount: Deactivated successfully.
Mar 7 02:07:47.873796 kubelet[2644]: E0307 02:07:47.863171 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:07:47.888113 containerd[1473]: time="2026-03-07T02:07:47.887752385Z" level=info msg="CreateContainer within sandbox \"2a631d4be2adfa83d6423e05f064c47e450452436534696e60ce1daa7431f4ee\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 7 02:07:47.957064 containerd[1473]: time="2026-03-07T02:07:47.956948811Z" level=info msg="CreateContainer within sandbox \"2a631d4be2adfa83d6423e05f064c47e450452436534696e60ce1daa7431f4ee\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"748fc4bcac556e0aebcd32fda9fa19c6648c093f5298a4bb8bd4df48a24f4bdf\""
Mar 7 02:07:47.958177 containerd[1473]: time="2026-03-07T02:07:47.958141799Z" level=info msg="StartContainer for \"748fc4bcac556e0aebcd32fda9fa19c6648c093f5298a4bb8bd4df48a24f4bdf\""
Mar 7 02:07:48.038749 systemd[1]: Started cri-containerd-748fc4bcac556e0aebcd32fda9fa19c6648c093f5298a4bb8bd4df48a24f4bdf.scope - libcontainer container 748fc4bcac556e0aebcd32fda9fa19c6648c093f5298a4bb8bd4df48a24f4bdf.
Mar 7 02:07:48.118532 systemd[1]: cri-containerd-748fc4bcac556e0aebcd32fda9fa19c6648c093f5298a4bb8bd4df48a24f4bdf.scope: Deactivated successfully.
Mar 7 02:07:48.128121 containerd[1473]: time="2026-03-07T02:07:48.127153727Z" level=info msg="StartContainer for \"748fc4bcac556e0aebcd32fda9fa19c6648c093f5298a4bb8bd4df48a24f4bdf\" returns successfully"
Mar 7 02:07:48.197611 containerd[1473]: time="2026-03-07T02:07:48.197381225Z" level=info msg="shim disconnected" id=748fc4bcac556e0aebcd32fda9fa19c6648c093f5298a4bb8bd4df48a24f4bdf namespace=k8s.io
Mar 7 02:07:48.197611 containerd[1473]: time="2026-03-07T02:07:48.197478479Z" level=warning msg="cleaning up after shim disconnected" id=748fc4bcac556e0aebcd32fda9fa19c6648c093f5298a4bb8bd4df48a24f4bdf namespace=k8s.io
Mar 7 02:07:48.197611 containerd[1473]: time="2026-03-07T02:07:48.197495230Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 02:07:48.761446 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-748fc4bcac556e0aebcd32fda9fa19c6648c093f5298a4bb8bd4df48a24f4bdf-rootfs.mount: Deactivated successfully.
Mar 7 02:07:48.882666 kubelet[2644]: E0307 02:07:48.881756 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:07:48.895011 containerd[1473]: time="2026-03-07T02:07:48.894900162Z" level=info msg="CreateContainer within sandbox \"2a631d4be2adfa83d6423e05f064c47e450452436534696e60ce1daa7431f4ee\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 7 02:07:48.963515 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4023582854.mount: Deactivated successfully.
Mar 7 02:07:48.971751 containerd[1473]: time="2026-03-07T02:07:48.971587868Z" level=info msg="CreateContainer within sandbox \"2a631d4be2adfa83d6423e05f064c47e450452436534696e60ce1daa7431f4ee\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"106843792b0cc473f6893d51bf045d479e20d2abaf0b3ae00f83d67fb8a19152\""
Mar 7 02:07:48.973106 containerd[1473]: time="2026-03-07T02:07:48.973002234Z" level=info msg="StartContainer for \"106843792b0cc473f6893d51bf045d479e20d2abaf0b3ae00f83d67fb8a19152\""
Mar 7 02:07:49.063735 systemd[1]: Started cri-containerd-106843792b0cc473f6893d51bf045d479e20d2abaf0b3ae00f83d67fb8a19152.scope - libcontainer container 106843792b0cc473f6893d51bf045d479e20d2abaf0b3ae00f83d67fb8a19152.
Mar 7 02:07:49.166411 containerd[1473]: time="2026-03-07T02:07:49.165500583Z" level=info msg="StartContainer for \"106843792b0cc473f6893d51bf045d479e20d2abaf0b3ae00f83d67fb8a19152\" returns successfully"
Mar 7 02:07:49.902522 kubelet[2644]: E0307 02:07:49.902463 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:07:49.952455 kubelet[2644]: I0307 02:07:49.951049 2644 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-w9ncl" podStartSLOduration=5.951033924 podStartE2EDuration="5.951033924s" podCreationTimestamp="2026-03-07 02:07:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 02:07:49.948187499 +0000 UTC m=+204.885277620" watchObservedRunningTime="2026-03-07 02:07:49.951033924 +0000 UTC m=+204.888124014"
Mar 7 02:07:50.438924 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Mar 7 02:07:50.953671 kubelet[2644]: E0307 02:07:50.953152 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:07:51.223748 kubelet[2644]: I0307 02:07:51.223505 2644 setters.go:546] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T02:07:51Z","lastTransitionTime":"2026-03-07T02:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 7 02:07:56.800575 systemd-networkd[1383]: lxc_health: Link UP
Mar 7 02:07:56.820151 systemd-networkd[1383]: lxc_health: Gained carrier
Mar 7 02:07:56.954367 kubelet[2644]: E0307 02:07:56.954190 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:07:57.932565 kubelet[2644]: E0307 02:07:57.931931 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:07:58.135772 systemd-networkd[1383]: lxc_health: Gained IPv6LL
Mar 7 02:07:58.807137 systemd[1]: run-containerd-runc-k8s.io-106843792b0cc473f6893d51bf045d479e20d2abaf0b3ae00f83d67fb8a19152-runc.t3wTgG.mount: Deactivated successfully.
Mar 7 02:07:58.946882 kubelet[2644]: E0307 02:07:58.941136 2644 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 02:08:03.413482 systemd[1]: run-containerd-runc-k8s.io-106843792b0cc473f6893d51bf045d479e20d2abaf0b3ae00f83d67fb8a19152-runc.DwJ94b.mount: Deactivated successfully.
Mar 7 02:08:03.604551 sshd[4710]: pam_unix(sshd:session): session closed for user core
Mar 7 02:08:03.615529 systemd[1]: sshd@29-10.0.0.146:22-10.0.0.1:56594.service: Deactivated successfully.
Mar 7 02:08:03.621739 systemd[1]: session-30.scope: Deactivated successfully.
Mar 7 02:08:03.624465 systemd-logind[1461]: Session 30 logged out. Waiting for processes to exit.
Mar 7 02:08:03.627486 systemd-logind[1461]: Removed session 30.