Mar 4 01:14:59.439066 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Mar 3 22:42:33 -00 2026
Mar 4 01:14:59.439100 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cfbb17c272ffeca64391861cc763ec4868ca597850b31cbd6ed67c590a72edc7
Mar 4 01:14:59.439118 kernel: BIOS-provided physical RAM map:
Mar 4 01:14:59.439128 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 4 01:14:59.439174 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 4 01:14:59.439185 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 4 01:14:59.439197 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 4 01:14:59.439206 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 4 01:14:59.439216 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 4 01:14:59.439230 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 4 01:14:59.439240 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 4 01:14:59.439249 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 4 01:14:59.439258 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 4 01:14:59.439267 kernel: NX (Execute Disable) protection: active
Mar 4 01:14:59.439279 kernel: APIC: Static calls initialized
Mar 4 01:14:59.439293 kernel: SMBIOS 2.8 present.
Mar 4 01:14:59.439304 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 4 01:14:59.439314 kernel: Hypervisor detected: KVM
Mar 4 01:14:59.439324 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 4 01:14:59.439335 kernel: kvm-clock: using sched offset of 6091336976 cycles
Mar 4 01:14:59.439347 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 4 01:14:59.439357 kernel: tsc: Detected 2445.426 MHz processor
Mar 4 01:14:59.439368 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 4 01:14:59.439380 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 4 01:14:59.439396 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 4 01:14:59.439407 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 4 01:14:59.439419 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 4 01:14:59.439430 kernel: Using GB pages for direct mapping
Mar 4 01:14:59.439441 kernel: ACPI: Early table checksum verification disabled
Mar 4 01:14:59.439451 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 4 01:14:59.439462 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 4 01:14:59.439473 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 4 01:14:59.439484 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 4 01:14:59.439500 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 4 01:14:59.439511 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 4 01:14:59.439522 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 4 01:14:59.439533 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 4 01:14:59.439543 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 4 01:14:59.439553 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 4 01:14:59.439563 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 4 01:14:59.439581 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 4 01:14:59.439596 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 4 01:14:59.439607 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 4 01:14:59.439618 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 4 01:14:59.439629 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 4 01:14:59.439640 kernel: No NUMA configuration found
Mar 4 01:14:59.439651 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 4 01:14:59.439666 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Mar 4 01:14:59.439676 kernel: Zone ranges:
Mar 4 01:14:59.439687 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 4 01:14:59.439698 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 4 01:14:59.439710 kernel: Normal empty
Mar 4 01:14:59.439721 kernel: Movable zone start for each node
Mar 4 01:14:59.439731 kernel: Early memory node ranges
Mar 4 01:14:59.439742 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 4 01:14:59.439753 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 4 01:14:59.439764 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 4 01:14:59.439780 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 4 01:14:59.439791 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 4 01:14:59.439802 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 4 01:14:59.439813 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 4 01:14:59.440029 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 4 01:14:59.440044 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 4 01:14:59.440056 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 4 01:14:59.440067 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 4 01:14:59.440078 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 4 01:14:59.440096 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 4 01:14:59.440108 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 4 01:14:59.440120 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 4 01:14:59.440131 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 4 01:14:59.440143 kernel: TSC deadline timer available
Mar 4 01:14:59.440154 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 4 01:14:59.440165 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 4 01:14:59.440176 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 4 01:14:59.440188 kernel: kvm-guest: setup PV sched yield
Mar 4 01:14:59.440204 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 4 01:14:59.440216 kernel: Booting paravirtualized kernel on KVM
Mar 4 01:14:59.440228 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 4 01:14:59.440240 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 4 01:14:59.440252 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Mar 4 01:14:59.440264 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Mar 4 01:14:59.440275 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 4 01:14:59.440286 kernel: kvm-guest: PV spinlocks enabled
Mar 4 01:14:59.440297 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 4 01:14:59.440316 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cfbb17c272ffeca64391861cc763ec4868ca597850b31cbd6ed67c590a72edc7
Mar 4 01:14:59.440328 kernel: random: crng init done
Mar 4 01:14:59.440340 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 4 01:14:59.440351 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 4 01:14:59.440362 kernel: Fallback order for Node 0: 0
Mar 4 01:14:59.440372 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Mar 4 01:14:59.440383 kernel: Policy zone: DMA32
Mar 4 01:14:59.440394 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 4 01:14:59.440410 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 136884K reserved, 0K cma-reserved)
Mar 4 01:14:59.440422 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 4 01:14:59.440433 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 4 01:14:59.440444 kernel: ftrace: allocated 149 pages with 4 groups
Mar 4 01:14:59.440455 kernel: Dynamic Preempt: voluntary
Mar 4 01:14:59.440466 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 4 01:14:59.440478 kernel: rcu: RCU event tracing is enabled.
Mar 4 01:14:59.440489 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 4 01:14:59.440501 kernel: Trampoline variant of Tasks RCU enabled.
Mar 4 01:14:59.440516 kernel: Rude variant of Tasks RCU enabled.
Mar 4 01:14:59.440527 kernel: Tracing variant of Tasks RCU enabled.
Mar 4 01:14:59.440539 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 4 01:14:59.440551 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 4 01:14:59.440562 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 4 01:14:59.440574 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 4 01:14:59.440585 kernel: Console: colour VGA+ 80x25
Mar 4 01:14:59.440596 kernel: printk: console [ttyS0] enabled
Mar 4 01:14:59.440607 kernel: ACPI: Core revision 20230628
Mar 4 01:14:59.440623 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 4 01:14:59.440635 kernel: APIC: Switch to symmetric I/O mode setup
Mar 4 01:14:59.440645 kernel: x2apic enabled
Mar 4 01:14:59.440656 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 4 01:14:59.440667 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 4 01:14:59.440678 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 4 01:14:59.440690 kernel: kvm-guest: setup PV IPIs
Mar 4 01:14:59.440701 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 4 01:14:59.440729 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 4 01:14:59.440741 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 4 01:14:59.440752 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 4 01:14:59.440764 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 4 01:14:59.440779 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 4 01:14:59.440790 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 4 01:14:59.440801 kernel: Spectre V2 : Mitigation: Retpolines
Mar 4 01:14:59.440813 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 4 01:14:59.440903 kernel: Speculative Store Bypass: Vulnerable
Mar 4 01:14:59.440987 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 4 01:14:59.441001 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 4 01:14:59.441014 kernel: active return thunk: srso_alias_return_thunk
Mar 4 01:14:59.441026 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 4 01:14:59.441038 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 4 01:14:59.441051 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 4 01:14:59.441063 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 4 01:14:59.441075 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 4 01:14:59.441092 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 4 01:14:59.441103 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 4 01:14:59.441116 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 4 01:14:59.441128 kernel: Freeing SMP alternatives memory: 32K
Mar 4 01:14:59.441140 kernel: pid_max: default: 32768 minimum: 301
Mar 4 01:14:59.441151 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 4 01:14:59.441163 kernel: landlock: Up and running.
Mar 4 01:14:59.441176 kernel: SELinux: Initializing.
Mar 4 01:14:59.441188 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 4 01:14:59.441204 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 4 01:14:59.441217 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 4 01:14:59.441229 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 4 01:14:59.441241 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 4 01:14:59.441254 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 4 01:14:59.441266 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 4 01:14:59.441278 kernel: signal: max sigframe size: 1776
Mar 4 01:14:59.441290 kernel: rcu: Hierarchical SRCU implementation.
Mar 4 01:14:59.441302 kernel: rcu: Max phase no-delay instances is 400.
Mar 4 01:14:59.441318 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 4 01:14:59.441329 kernel: smp: Bringing up secondary CPUs ...
Mar 4 01:14:59.441340 kernel: smpboot: x86: Booting SMP configuration:
Mar 4 01:14:59.441352 kernel: .... node #0, CPUs: #1 #2 #3
Mar 4 01:14:59.441364 kernel: smp: Brought up 1 node, 4 CPUs
Mar 4 01:14:59.441375 kernel: smpboot: Max logical packages: 1
Mar 4 01:14:59.441387 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 4 01:14:59.441399 kernel: devtmpfs: initialized
Mar 4 01:14:59.441410 kernel: x86/mm: Memory block size: 128MB
Mar 4 01:14:59.441426 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 4 01:14:59.441437 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 4 01:14:59.441449 kernel: pinctrl core: initialized pinctrl subsystem
Mar 4 01:14:59.441460 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 4 01:14:59.441472 kernel: audit: initializing netlink subsys (disabled)
Mar 4 01:14:59.441483 kernel: audit: type=2000 audit(1772586898.144:1): state=initialized audit_enabled=0 res=1
Mar 4 01:14:59.441494 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 4 01:14:59.441506 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 4 01:14:59.441518 kernel: cpuidle: using governor menu
Mar 4 01:14:59.441534 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 4 01:14:59.441546 kernel: dca service started, version 1.12.1
Mar 4 01:14:59.441558 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 4 01:14:59.441570 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 4 01:14:59.441581 kernel: PCI: Using configuration type 1 for base access
Mar 4 01:14:59.441593 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 4 01:14:59.441604 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 4 01:14:59.441616 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 4 01:14:59.441628 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 4 01:14:59.441645 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 4 01:14:59.441656 kernel: ACPI: Added _OSI(Module Device)
Mar 4 01:14:59.441668 kernel: ACPI: Added _OSI(Processor Device)
Mar 4 01:14:59.441679 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 4 01:14:59.441691 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 4 01:14:59.441702 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 4 01:14:59.441714 kernel: ACPI: Interpreter enabled
Mar 4 01:14:59.441725 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 4 01:14:59.441737 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 4 01:14:59.441753 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 4 01:14:59.441765 kernel: PCI: Using E820 reservations for host bridge windows
Mar 4 01:14:59.441776 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 4 01:14:59.441788 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 4 01:14:59.442199 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 4 01:14:59.442406 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 4 01:14:59.442603 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 4 01:14:59.442628 kernel: PCI host bridge to bus 0000:00
Mar 4 01:14:59.442906 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 4 01:14:59.443187 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 4 01:14:59.443373 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 4 01:14:59.443547 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 4 01:14:59.443725 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 4 01:14:59.444022 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 4 01:14:59.444225 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 4 01:14:59.444622 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 4 01:14:59.444906 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 4 01:14:59.445146 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 4 01:14:59.445347 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 4 01:14:59.445551 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 4 01:14:59.445809 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 4 01:14:59.446160 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 4 01:14:59.446372 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 4 01:14:59.446575 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 4 01:14:59.446772 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 4 01:14:59.447144 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 4 01:14:59.447338 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Mar 4 01:14:59.447528 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 4 01:14:59.447721 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 4 01:14:59.448049 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 4 01:14:59.448255 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Mar 4 01:14:59.448452 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Mar 4 01:14:59.448693 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 4 01:14:59.449026 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 4 01:14:59.449249 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 4 01:14:59.449444 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 4 01:14:59.449658 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 4 01:14:59.449994 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Mar 4 01:14:59.450196 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Mar 4 01:14:59.450398 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 4 01:14:59.450587 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 4 01:14:59.450613 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 4 01:14:59.450627 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 4 01:14:59.450639 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 4 01:14:59.450651 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 4 01:14:59.450664 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 4 01:14:59.450676 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 4 01:14:59.450687 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 4 01:14:59.450699 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 4 01:14:59.450711 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 4 01:14:59.450729 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 4 01:14:59.450741 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 4 01:14:59.450752 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 4 01:14:59.450763 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 4 01:14:59.450774 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 4 01:14:59.450786 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 4 01:14:59.450798 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 4 01:14:59.450809 kernel: iommu: Default domain type: Translated
Mar 4 01:14:59.450898 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 4 01:14:59.450978 kernel: PCI: Using ACPI for IRQ routing
Mar 4 01:14:59.450991 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 4 01:14:59.451003 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 4 01:14:59.451014 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 4 01:14:59.451214 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 4 01:14:59.451422 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 4 01:14:59.451627 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 4 01:14:59.451647 kernel: vgaarb: loaded
Mar 4 01:14:59.451666 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 4 01:14:59.451678 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 4 01:14:59.451690 kernel: clocksource: Switched to clocksource kvm-clock
Mar 4 01:14:59.451703 kernel: VFS: Disk quotas dquot_6.6.0
Mar 4 01:14:59.451715 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 4 01:14:59.451728 kernel: pnp: PnP ACPI init
Mar 4 01:14:59.452102 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 4 01:14:59.452125 kernel: pnp: PnP ACPI: found 6 devices
Mar 4 01:14:59.452144 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 4 01:14:59.452156 kernel: NET: Registered PF_INET protocol family
Mar 4 01:14:59.452168 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 4 01:14:59.452180 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 4 01:14:59.452192 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 4 01:14:59.452205 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 4 01:14:59.452218 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 4 01:14:59.452230 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 4 01:14:59.452242 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 4 01:14:59.452260 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 4 01:14:59.452272 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 4 01:14:59.452284 kernel: NET: Registered PF_XDP protocol family
Mar 4 01:14:59.452476 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 4 01:14:59.452658 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 4 01:14:59.452954 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 4 01:14:59.453138 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 4 01:14:59.453542 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 4 01:14:59.454039 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 4 01:14:59.454061 kernel: PCI: CLS 0 bytes, default 64
Mar 4 01:14:59.454075 kernel: Initialise system trusted keyrings
Mar 4 01:14:59.454088 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 4 01:14:59.454100 kernel: Key type asymmetric registered
Mar 4 01:14:59.454111 kernel: Asymmetric key parser 'x509' registered
Mar 4 01:14:59.454123 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 4 01:14:59.454135 kernel: io scheduler mq-deadline registered
Mar 4 01:14:59.454146 kernel: io scheduler kyber registered
Mar 4 01:14:59.454164 kernel: io scheduler bfq registered
Mar 4 01:14:59.454176 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 4 01:14:59.454190 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 4 01:14:59.454202 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 4 01:14:59.454214 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 4 01:14:59.454225 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 4 01:14:59.454237 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 4 01:14:59.454248 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 4 01:14:59.454261 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 4 01:14:59.454273 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 4 01:14:59.454479 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 4 01:14:59.454501 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 4 01:14:59.454687 kernel: rtc_cmos 00:04: registered as rtc0
Mar 4 01:14:59.455003 kernel: rtc_cmos 00:04: setting system clock to 2026-03-04T01:14:58 UTC (1772586898)
Mar 4 01:14:59.455244 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 4 01:14:59.455264 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 4 01:14:59.455276 kernel: NET: Registered PF_INET6 protocol family
Mar 4 01:14:59.455294 kernel: Segment Routing with IPv6
Mar 4 01:14:59.455307 kernel: In-situ OAM (IOAM) with IPv6
Mar 4 01:14:59.455319 kernel: NET: Registered PF_PACKET protocol family
Mar 4 01:14:59.455331 kernel: Key type dns_resolver registered
Mar 4 01:14:59.455342 kernel: IPI shorthand broadcast: enabled
Mar 4 01:14:59.455354 kernel: sched_clock: Marking stable (1217043691, 330385473)->(1951775946, -404346782)
Mar 4 01:14:59.455366 kernel: registered taskstats version 1
Mar 4 01:14:59.455377 kernel: Loading compiled-in X.509 certificates
Mar 4 01:14:59.455389 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: be1dcbe3e3dee66976c19d61f4b179b405e1c498'
Mar 4 01:14:59.455405 kernel: Key type .fscrypt registered
Mar 4 01:14:59.455416 kernel: Key type fscrypt-provisioning registered
Mar 4 01:14:59.455429 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 4 01:14:59.455440 kernel: ima: Allocated hash algorithm: sha1
Mar 4 01:14:59.455452 kernel: ima: No architecture policies found
Mar 4 01:14:59.455464 kernel: clk: Disabling unused clocks
Mar 4 01:14:59.455476 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 4 01:14:59.455488 kernel: Write protecting the kernel read-only data: 36864k
Mar 4 01:14:59.455501 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 4 01:14:59.455518 kernel: Run /init as init process
Mar 4 01:14:59.455530 kernel: with arguments:
Mar 4 01:14:59.455543 kernel: /init
Mar 4 01:14:59.455555 kernel: with environment:
Mar 4 01:14:59.455568 kernel: HOME=/
Mar 4 01:14:59.455579 kernel: TERM=linux
Mar 4 01:14:59.455594 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 4 01:14:59.455609 systemd[1]: Detected virtualization kvm.
Mar 4 01:14:59.455627 systemd[1]: Detected architecture x86-64.
Mar 4 01:14:59.455640 systemd[1]: Running in initrd.
Mar 4 01:14:59.455651 systemd[1]: No hostname configured, using default hostname.
Mar 4 01:14:59.455663 systemd[1]: Hostname set to .
Mar 4 01:14:59.455676 systemd[1]: Initializing machine ID from VM UUID.
Mar 4 01:14:59.455689 systemd[1]: Queued start job for default target initrd.target.
Mar 4 01:14:59.455701 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 4 01:14:59.455713 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 4 01:14:59.455730 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 4 01:14:59.455743 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 4 01:14:59.455755 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 4 01:14:59.455768 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 4 01:14:59.455783 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 4 01:14:59.455795 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 4 01:14:59.455808 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 4 01:14:59.455902 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 4 01:14:59.455960 systemd[1]: Reached target paths.target - Path Units.
Mar 4 01:14:59.455974 systemd[1]: Reached target slices.target - Slice Units.
Mar 4 01:14:59.455987 systemd[1]: Reached target swap.target - Swaps.
Mar 4 01:14:59.456019 systemd[1]: Reached target timers.target - Timer Units.
Mar 4 01:14:59.456035 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 4 01:14:59.456051 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 4 01:14:59.456065 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 4 01:14:59.456078 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 4 01:14:59.456091 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 4 01:14:59.456104 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 4 01:14:59.456119 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 4 01:14:59.456131 systemd[1]: Reached target sockets.target - Socket Units.
Mar 4 01:14:59.456144 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 4 01:14:59.456157 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 4 01:14:59.456174 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 4 01:14:59.456187 systemd[1]: Starting systemd-fsck-usr.service...
Mar 4 01:14:59.456200 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 4 01:14:59.456215 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 4 01:14:59.456228 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 01:14:59.456279 systemd-journald[194]: Collecting audit messages is disabled.
Mar 4 01:14:59.456320 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 4 01:14:59.456335 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 4 01:14:59.456350 systemd-journald[194]: Journal started
Mar 4 01:14:59.456383 systemd-journald[194]: Runtime Journal (/run/log/journal/2f43525aa0f1410ca791e86fa9a11aa9) is 6.0M, max 48.4M, 42.3M free.
Mar 4 01:14:59.460960 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 4 01:14:59.465657 systemd[1]: Finished systemd-fsck-usr.service.
Mar 4 01:14:59.478701 systemd-modules-load[195]: Inserted module 'overlay'
Mar 4 01:14:59.707486 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 4 01:14:59.707530 kernel: Bridge firewalling registered
Mar 4 01:14:59.483221 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 4 01:14:59.519676 systemd-modules-load[195]: Inserted module 'br_netfilter'
Mar 4 01:14:59.734073 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 4 01:14:59.744023 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 4 01:14:59.752493 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 01:14:59.760181 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 4 01:14:59.769265 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 4 01:14:59.791533 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 4 01:14:59.793134 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 4 01:14:59.794386 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 4 01:14:59.817134 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 01:14:59.842172 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 4 01:14:59.862360 dracut-cmdline[226]: dracut-dracut-053
Mar 4 01:14:59.862360 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cfbb17c272ffeca64391861cc763ec4868ca597850b31cbd6ed67c590a72edc7
Mar 4 01:14:59.847011 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 4 01:14:59.847344 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 4 01:14:59.851326 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 4 01:14:59.916809 systemd-resolved[235]: Positive Trust Anchors:
Mar 4 01:14:59.916906 systemd-resolved[235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 4 01:14:59.916970 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 4 01:14:59.919370 systemd-resolved[235]: Defaulting to hostname 'linux'.
Mar 4 01:14:59.920655 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 4 01:14:59.929511 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 4 01:14:59.967899 kernel: SCSI subsystem initialized
Mar 4 01:14:59.980985 kernel: Loading iSCSI transport class v2.0-870.
Mar 4 01:14:59.997894 kernel: iscsi: registered transport (tcp)
Mar 4 01:15:00.022359 kernel: iscsi: registered transport (qla4xxx)
Mar 4 01:15:00.022480 kernel: QLogic iSCSI HBA Driver
Mar 4 01:15:00.086068 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 4 01:15:00.106190 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 4 01:15:00.141439 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 4 01:15:00.141509 kernel: device-mapper: uevent: version 1.0.3
Mar 4 01:15:00.144620 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 4 01:15:00.190960 kernel: raid6: avx2x4 gen() 33760 MB/s
Mar 4 01:15:00.209014 kernel: raid6: avx2x2 gen() 30596 MB/s
Mar 4 01:15:00.228551 kernel: raid6: avx2x1 gen() 23305 MB/s
Mar 4 01:15:00.228620 kernel: raid6: using algorithm avx2x4 gen() 33760 MB/s
Mar 4 01:15:00.249185 kernel: raid6: .... xor() 5839 MB/s, rmw enabled
Mar 4 01:15:00.249244 kernel: raid6: using avx2x2 recovery algorithm
Mar 4 01:15:00.272969 kernel: xor: automatically using best checksumming function avx
Mar 4 01:15:00.462970 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 4 01:15:00.478467 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 4 01:15:00.500120 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 4 01:15:00.512603 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Mar 4 01:15:00.517602 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 4 01:15:00.518968 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 4 01:15:00.553421 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation
Mar 4 01:15:00.603313 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 4 01:15:00.615103 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 4 01:15:00.705224 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 4 01:15:00.720373 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 4 01:15:00.744360 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 4 01:15:00.753184 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 4 01:15:00.763691 kernel: cryptd: max_cpu_qlen set to 1000
Mar 4 01:15:00.763812 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 4 01:15:00.770286 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 4 01:15:00.793880 kernel: libata version 3.00 loaded.
Mar 4 01:15:00.802610 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 4 01:15:00.800211 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 4 01:15:00.814323 kernel: ahci 0000:00:1f.2: version 3.0
Mar 4 01:15:00.814519 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 4 01:15:00.818804 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 4 01:15:00.814913 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 4 01:15:00.850082 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 4 01:15:00.850430 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 4 01:15:00.850588 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 4 01:15:00.850600 kernel: GPT:9289727 != 19775487
Mar 4 01:15:00.850610 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 4 01:15:00.850627 kernel: GPT:9289727 != 19775487
Mar 4 01:15:00.850636 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 4 01:15:00.850646 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 4 01:15:00.850655 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 4 01:15:00.850665 kernel: scsi host0: ahci
Mar 4 01:15:00.815066 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 01:15:00.862101 kernel: scsi host1: ahci
Mar 4 01:15:00.862463 kernel: scsi host2: ahci
Mar 4 01:15:00.862627 kernel: AES CTR mode by8 optimization enabled
Mar 4 01:15:00.862639 kernel: scsi host3: ahci
Mar 4 01:15:00.851785 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 4 01:15:00.907534 kernel: scsi host4: ahci
Mar 4 01:15:00.908134 kernel: scsi host5: ahci
Mar 4 01:15:00.908305 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Mar 4 01:15:00.908316 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Mar 4 01:15:00.908326 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Mar 4 01:15:00.908335 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Mar 4 01:15:00.908344 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Mar 4 01:15:00.908353 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Mar 4 01:15:00.908369 kernel: BTRFS: device fsid 251c1416-ef37-47f1-be3f-832af5870605 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (471)
Mar 4 01:15:00.908379 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (470)
Mar 4 01:15:00.887516 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 4 01:15:00.887792 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 01:15:00.912531 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 01:15:00.931277 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 01:15:00.937108 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 4 01:15:00.971766 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 4 01:15:01.144429 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 4 01:15:01.150562 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 01:15:01.164748 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 4 01:15:01.165011 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 4 01:15:01.180747 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 4 01:15:01.220338 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 4 01:15:01.220404 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 4 01:15:01.220434 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 4 01:15:01.202163 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 4 01:15:01.241440 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 4 01:15:01.241609 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 4 01:15:01.241626 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 4 01:15:01.241730 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 4 01:15:01.241743 kernel: ata3.00: applying bridge limits
Mar 4 01:15:01.241753 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 4 01:15:01.214118 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 4 01:15:01.248975 disk-uuid[554]: Primary Header is updated.
Mar 4 01:15:01.248975 disk-uuid[554]: Secondary Entries is updated.
Mar 4 01:15:01.248975 disk-uuid[554]: Secondary Header is updated.
Mar 4 01:15:01.262670 kernel: ata3.00: configured for UDMA/100
Mar 4 01:15:01.262699 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 4 01:15:01.270716 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 01:15:01.325150 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 4 01:15:01.325437 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 4 01:15:01.340902 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 4 01:15:02.240889 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 4 01:15:02.241316 disk-uuid[556]: The operation has completed successfully.
Mar 4 01:15:02.272577 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 4 01:15:02.272919 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 4 01:15:02.320153 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 4 01:15:02.329758 sh[595]: Success
Mar 4 01:15:02.347904 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 4 01:15:02.399010 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 4 01:15:02.419646 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 4 01:15:02.422642 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 4 01:15:02.454875 kernel: BTRFS info (device dm-0): first mount of filesystem 251c1416-ef37-47f1-be3f-832af5870605
Mar 4 01:15:02.454923 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 4 01:15:02.454991 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 4 01:15:02.458152 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 4 01:15:02.460463 kernel: BTRFS info (device dm-0): using free space tree
Mar 4 01:15:02.471431 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 4 01:15:02.472331 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 4 01:15:02.485038 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 4 01:15:02.488977 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 4 01:15:02.509738 kernel: BTRFS info (device vda6): first mount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747
Mar 4 01:15:02.509769 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 4 01:15:02.509788 kernel: BTRFS info (device vda6): using free space tree
Mar 4 01:15:02.515089 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 4 01:15:02.526155 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 4 01:15:02.533196 kernel: BTRFS info (device vda6): last unmount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747
Mar 4 01:15:02.538998 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 4 01:15:02.553143 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 4 01:15:02.641051 ignition[699]: Ignition 2.19.0
Mar 4 01:15:02.641535 ignition[699]: Stage: fetch-offline
Mar 4 01:15:02.641585 ignition[699]: no configs at "/usr/lib/ignition/base.d"
Mar 4 01:15:02.641597 ignition[699]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 4 01:15:02.641712 ignition[699]: parsed url from cmdline: ""
Mar 4 01:15:02.641717 ignition[699]: no config URL provided
Mar 4 01:15:02.641723 ignition[699]: reading system config file "/usr/lib/ignition/user.ign"
Mar 4 01:15:02.641732 ignition[699]: no config at "/usr/lib/ignition/user.ign"
Mar 4 01:15:02.641759 ignition[699]: op(1): [started] loading QEMU firmware config module
Mar 4 01:15:02.665056 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 4 01:15:02.641764 ignition[699]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 4 01:15:02.661076 ignition[699]: op(1): [finished] loading QEMU firmware config module
Mar 4 01:15:02.661100 ignition[699]: QEMU firmware config was not found. Ignoring...
Mar 4 01:15:02.691318 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 4 01:15:02.723597 systemd-networkd[783]: lo: Link UP
Mar 4 01:15:02.723637 systemd-networkd[783]: lo: Gained carrier
Mar 4 01:15:02.725795 systemd-networkd[783]: Enumeration completed
Mar 4 01:15:02.726010 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 4 01:15:02.727680 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 4 01:15:02.727686 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 4 01:15:02.729102 systemd-networkd[783]: eth0: Link UP
Mar 4 01:15:02.729106 systemd-networkd[783]: eth0: Gained carrier
Mar 4 01:15:02.729113 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 4 01:15:02.731654 systemd[1]: Reached target network.target - Network.
Mar 4 01:15:02.773003 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.125/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 4 01:15:02.918888 ignition[699]: parsing config with SHA512: 3fb0673f0a786e84ba7a92e32c9c0c9d6d835dc31572752491a8eb859b3586356faf21eb456589f8c4b413fd63539684a73d71f1760803de2944ca3d43af266d
Mar 4 01:15:02.922806 unknown[699]: fetched base config from "system"
Mar 4 01:15:02.922817 unknown[699]: fetched user config from "qemu"
Mar 4 01:15:02.923571 ignition[699]: fetch-offline: fetch-offline passed
Mar 4 01:15:02.925558 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 4 01:15:02.923647 ignition[699]: Ignition finished successfully
Mar 4 01:15:02.929680 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 4 01:15:02.950089 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 4 01:15:02.966415 ignition[787]: Ignition 2.19.0
Mar 4 01:15:02.966443 ignition[787]: Stage: kargs
Mar 4 01:15:02.966587 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Mar 4 01:15:02.971126 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 4 01:15:02.966598 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 4 01:15:02.967368 ignition[787]: kargs: kargs passed
Mar 4 01:15:02.967407 ignition[787]: Ignition finished successfully
Mar 4 01:15:02.991118 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 4 01:15:03.008922 ignition[796]: Ignition 2.19.0
Mar 4 01:15:03.008981 ignition[796]: Stage: disks
Mar 4 01:15:03.011329 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 4 01:15:03.009176 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Mar 4 01:15:03.016545 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 4 01:15:03.009188 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 4 01:15:03.022439 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 4 01:15:03.009927 ignition[796]: disks: disks passed
Mar 4 01:15:03.026087 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 4 01:15:03.010003 ignition[796]: Ignition finished successfully
Mar 4 01:15:03.031537 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 4 01:15:03.034574 systemd[1]: Reached target basic.target - Basic System.
Mar 4 01:15:03.053110 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 4 01:15:03.070088 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 4 01:15:03.073924 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 4 01:15:03.080592 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 4 01:15:03.193004 kernel: EXT4-fs (vda9): mounted filesystem 77c4d29a-0423-4e33-8b82-61754d97532c r/w with ordered data mode. Quota mode: none.
Mar 4 01:15:03.193690 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 4 01:15:03.196815 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 4 01:15:03.212998 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 4 01:15:03.237062 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (813)
Mar 4 01:15:03.237099 kernel: BTRFS info (device vda6): first mount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747
Mar 4 01:15:03.237111 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 4 01:15:03.237121 kernel: BTRFS info (device vda6): using free space tree
Mar 4 01:15:03.217795 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 4 01:15:03.256999 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 4 01:15:03.237243 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 4 01:15:03.237287 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 4 01:15:03.237313 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 4 01:15:03.245689 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 4 01:15:03.246706 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 4 01:15:03.249089 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 4 01:15:03.298404 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory
Mar 4 01:15:03.304496 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory
Mar 4 01:15:03.309367 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory
Mar 4 01:15:03.314896 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 4 01:15:03.428650 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 4 01:15:03.442187 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 4 01:15:03.448032 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 4 01:15:03.460717 kernel: BTRFS info (device vda6): last unmount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747
Mar 4 01:15:03.454533 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 4 01:15:03.484922 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 4 01:15:03.492813 ignition[927]: INFO : Ignition 2.19.0
Mar 4 01:15:03.492813 ignition[927]: INFO : Stage: mount
Mar 4 01:15:03.497115 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 4 01:15:03.497115 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 4 01:15:03.497115 ignition[927]: INFO : mount: mount passed
Mar 4 01:15:03.497115 ignition[927]: INFO : Ignition finished successfully
Mar 4 01:15:03.501201 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 4 01:15:03.520030 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 4 01:15:03.534922 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 4 01:15:03.554897 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940)
Mar 4 01:15:03.554932 kernel: BTRFS info (device vda6): first mount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747
Mar 4 01:15:03.560485 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 4 01:15:03.560520 kernel: BTRFS info (device vda6): using free space tree
Mar 4 01:15:03.568079 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 4 01:15:03.570678 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 4 01:15:03.604635 ignition[957]: INFO : Ignition 2.19.0
Mar 4 01:15:03.604635 ignition[957]: INFO : Stage: files
Mar 4 01:15:03.609500 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 4 01:15:03.609500 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 4 01:15:03.616808 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Mar 4 01:15:03.620920 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 4 01:15:03.620920 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 4 01:15:03.629548 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 4 01:15:03.633479 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 4 01:15:03.633479 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 4 01:15:03.630644 unknown[957]: wrote ssh authorized keys file for user: core
Mar 4 01:15:03.645039 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 4 01:15:03.645039 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 4 01:15:03.706259 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 4 01:15:03.796639 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 4 01:15:03.796639 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 4 01:15:03.807450 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 4 01:15:03.816104 systemd-networkd[783]: eth0: Gained IPv6LL
Mar 4 01:15:03.941342 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 4 01:15:04.227265 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 4 01:15:04.227265 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 4 01:15:04.240128 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 4 01:15:04.240128 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 4 01:15:04.240128 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 4 01:15:04.240128 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 4 01:15:04.240128 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 4 01:15:04.240128 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 4 01:15:04.240128 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 4 01:15:04.240128 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 4 01:15:04.240128 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 4 01:15:04.240128 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 4 01:15:04.240128 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 4 01:15:04.240128 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 4 01:15:04.240128 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Mar 4 01:15:04.486162 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 4 01:15:05.227150 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 4 01:15:05.227150 ignition[957]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 4 01:15:05.238771 ignition[957]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 4 01:15:05.245788 ignition[957]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 4 01:15:05.245788 ignition[957]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 4 01:15:05.245788 ignition[957]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Mar 4 01:15:05.245788 ignition[957]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 4 01:15:05.245788 ignition[957]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 4 01:15:05.245788 ignition[957]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Mar 4 01:15:05.245788 ignition[957]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Mar 4 01:15:05.289802 ignition[957]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 4 01:15:05.289802 ignition[957]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 4 01:15:05.289802 ignition[957]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 4 01:15:05.289802 ignition[957]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Mar 4 01:15:05.289802 ignition[957]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Mar 4 01:15:05.289802 ignition[957]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 4 01:15:05.289802 ignition[957]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 4 01:15:05.289802 ignition[957]: INFO : files: files passed
Mar 4 01:15:05.289802 ignition[957]: INFO : Ignition finished successfully
Mar 4 01:15:05.288220 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 4 01:15:05.311310 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 4 01:15:05.320149 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 4 01:15:05.328358 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 4 01:15:05.391568 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 4 01:15:05.328491 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 4 01:15:05.402413 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 4 01:15:05.402413 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 4 01:15:05.344664 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 4 01:15:05.436358 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 4 01:15:05.355184 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 4 01:15:05.364461 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 4 01:15:05.403795 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 4 01:15:05.404030 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 4 01:15:05.414943 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 4 01:15:05.425808 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 4 01:15:05.430452 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 4 01:15:05.449354 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 4 01:15:05.462492 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 4 01:15:05.469579 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 4 01:15:05.490543 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 4 01:15:05.494473 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 4 01:15:05.501551 systemd[1]: Stopped target timers.target - Timer Units.
Mar 4 01:15:05.507890 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 4 01:15:05.508081 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 4 01:15:05.515398 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 4 01:15:05.521713 systemd[1]: Stopped target basic.target - Basic System.
Mar 4 01:15:05.527810 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 4 01:15:05.533907 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 4 01:15:05.540251 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 4 01:15:05.547112 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 4 01:15:05.553547 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 4 01:15:05.558212 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 4 01:15:05.564805 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 4 01:15:05.571697 systemd[1]: Stopped target swap.target - Swaps.
Mar 4 01:15:05.578254 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 4 01:15:05.578498 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 4 01:15:05.586017 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 4 01:15:05.591454 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 4 01:15:05.598668 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 4 01:15:05.599049 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 4 01:15:05.605432 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 4 01:15:05.605573 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 4 01:15:05.612669 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 4 01:15:05.612882 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 4 01:15:05.617564 systemd[1]: Stopped target paths.target - Path Units.
Mar 4 01:15:05.623035 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 4 01:15:05.627003 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 4 01:15:05.633064 systemd[1]: Stopped target slices.target - Slice Units.
Mar 4 01:15:05.639171 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 4 01:15:05.645058 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 4 01:15:05.645193 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 4 01:15:05.652292 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 4 01:15:05.714095 ignition[1012]: INFO : Ignition 2.19.0
Mar 4 01:15:05.714095 ignition[1012]: INFO : Stage: umount
Mar 4 01:15:05.714095 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 4 01:15:05.714095 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 4 01:15:05.714095 ignition[1012]: INFO : umount: umount passed
Mar 4 01:15:05.714095 ignition[1012]: INFO : Ignition finished successfully
Mar 4 01:15:05.652475 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 4 01:15:05.658154 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 4 01:15:05.658304 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 4 01:15:05.665031 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 4 01:15:05.665176 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 4 01:15:05.686144 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 4 01:15:05.691250 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 4 01:15:05.695729 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 4 01:15:05.696141 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 4 01:15:05.702081 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 4 01:15:05.702353 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 4 01:15:05.710696 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 4 01:15:05.710884 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 4 01:15:05.715760 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 4 01:15:05.715953 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 4 01:15:05.721736 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 4 01:15:05.724709 systemd[1]: Stopped target network.target - Network.
Mar 4 01:15:05.729346 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 4 01:15:05.729404 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 4 01:15:05.736062 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 4 01:15:05.736113 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 4 01:15:05.741687 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 4 01:15:05.741738 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 4 01:15:05.747715 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 4 01:15:05.747768 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 4 01:15:05.755292 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 4 01:15:05.763435 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 4 01:15:05.771933 systemd-networkd[783]: eth0: DHCPv6 lease lost
Mar 4 01:15:05.775414 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 4 01:15:05.775626 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 4 01:15:05.810105 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 4 01:15:05.810290 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 4 01:15:05.820000 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 4 01:15:05.820154 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 4 01:15:05.825608 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 4 01:15:05.825660 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 4 01:15:05.830171 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 4 01:15:05.830228 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 4 01:15:05.861163 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 4 01:15:05.862670 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 4 01:15:05.862739 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 4 01:15:05.872096 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 4 01:15:05.872181 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 4 01:15:05.878523 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 4 01:15:05.878589 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 4 01:15:05.885225 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 4 01:15:05.885289 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 4 01:15:05.892790 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 4 01:15:05.918800 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 4 01:15:05.919158 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 4 01:15:05.924300 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 4 01:15:05.924662 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 4 01:15:05.926433 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 4 01:15:05.926536 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 4 01:15:05.936236 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 4 01:15:05.936278 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 4 01:15:05.942158 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 4 01:15:05.942208 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 4 01:15:05.958153 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 4 01:15:05.958234 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 4 01:15:05.970125 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 4 01:15:05.970197 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 01:15:05.995299 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 4 01:15:06.001036 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 4 01:15:06.001131 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 4 01:15:06.008331 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 4 01:15:06.008390 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 4 01:15:06.015270 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 4 01:15:06.015319 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 4 01:15:06.019268 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 4 01:15:06.019318 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 01:15:06.023538 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 4 01:15:06.023711 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 4 01:15:06.030485 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 4 01:15:06.108878 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Mar 4 01:15:06.054074 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 4 01:15:06.062941 systemd[1]: Switching root.
Mar 4 01:15:06.115702 systemd-journald[194]: Journal stopped
Mar 4 01:15:07.465757 kernel: SELinux: policy capability network_peer_controls=1
Mar 4 01:15:07.465933 kernel: SELinux: policy capability open_perms=1
Mar 4 01:15:07.465957 kernel: SELinux: policy capability extended_socket_class=1
Mar 4 01:15:07.466037 kernel: SELinux: policy capability always_check_network=0
Mar 4 01:15:07.466058 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 4 01:15:07.466075 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 4 01:15:07.466093 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 4 01:15:07.466109 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 4 01:15:07.466126 kernel: audit: type=1403 audit(1772586906.299:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 4 01:15:07.466155 systemd[1]: Successfully loaded SELinux policy in 58.307ms.
Mar 4 01:15:07.466184 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.766ms.
Mar 4 01:15:07.466203 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 4 01:15:07.466225 systemd[1]: Detected virtualization kvm.
Mar 4 01:15:07.466244 systemd[1]: Detected architecture x86-64.
Mar 4 01:15:07.466261 systemd[1]: Detected first boot.
Mar 4 01:15:07.466279 systemd[1]: Initializing machine ID from VM UUID.
Mar 4 01:15:07.466297 zram_generator::config[1055]: No configuration found.
Mar 4 01:15:07.466315 systemd[1]: Populated /etc with preset unit settings.
Mar 4 01:15:07.466343 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 4 01:15:07.466362 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 4 01:15:07.466380 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 4 01:15:07.466398 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 4 01:15:07.466422 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 4 01:15:07.466438 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 4 01:15:07.466457 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 4 01:15:07.466476 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 4 01:15:07.466501 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 4 01:15:07.466522 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 4 01:15:07.466541 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 4 01:15:07.466559 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 4 01:15:07.466577 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 4 01:15:07.466596 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 4 01:15:07.466613 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 4 01:15:07.466634 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 4 01:15:07.466660 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 4 01:15:07.466684 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 4 01:15:07.466701 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 4 01:15:07.466719 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 4 01:15:07.466735 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 4 01:15:07.466753 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 4 01:15:07.466771 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 4 01:15:07.466789 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 4 01:15:07.466807 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 4 01:15:07.466914 systemd[1]: Reached target slices.target - Slice Units.
Mar 4 01:15:07.466939 systemd[1]: Reached target swap.target - Swaps.
Mar 4 01:15:07.466958 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 4 01:15:07.466976 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 4 01:15:07.467050 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 4 01:15:07.467070 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 4 01:15:07.467089 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 4 01:15:07.467108 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 4 01:15:07.467128 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 4 01:15:07.467153 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 4 01:15:07.467173 systemd[1]: Mounting media.mount - External Media Directory...
Mar 4 01:15:07.467194 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 4 01:15:07.467212 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 4 01:15:07.467230 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 4 01:15:07.467248 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 4 01:15:07.467266 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 4 01:15:07.467284 systemd[1]: Reached target machines.target - Containers.
Mar 4 01:15:07.467308 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 4 01:15:07.467327 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 4 01:15:07.467346 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 4 01:15:07.467364 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 4 01:15:07.467381 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 4 01:15:07.467400 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 4 01:15:07.467420 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 4 01:15:07.467438 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 4 01:15:07.467456 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 4 01:15:07.467481 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 4 01:15:07.467499 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 4 01:15:07.467517 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 4 01:15:07.467536 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 4 01:15:07.467556 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 4 01:15:07.467576 kernel: fuse: init (API version 7.39)
Mar 4 01:15:07.467597 kernel: loop: module loaded
Mar 4 01:15:07.467615 kernel: ACPI: bus type drm_connector registered
Mar 4 01:15:07.467634 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 4 01:15:07.467657 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 4 01:15:07.467676 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 4 01:15:07.467721 systemd-journald[1139]: Collecting audit messages is disabled.
Mar 4 01:15:07.467752 systemd-journald[1139]: Journal started
Mar 4 01:15:07.467785 systemd-journald[1139]: Runtime Journal (/run/log/journal/2f43525aa0f1410ca791e86fa9a11aa9) is 6.0M, max 48.4M, 42.3M free.
Mar 4 01:15:06.972929 systemd[1]: Queued start job for default target multi-user.target.
Mar 4 01:15:06.997234 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 4 01:15:06.998015 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 4 01:15:06.998406 systemd[1]: systemd-journald.service: Consumed 1.637s CPU time.
Mar 4 01:15:07.471975 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 4 01:15:07.483224 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 4 01:15:07.488132 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 4 01:15:07.488173 systemd[1]: Stopped verity-setup.service.
Mar 4 01:15:07.498911 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 4 01:15:07.506041 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 4 01:15:07.507291 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 4 01:15:07.510758 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 4 01:15:07.514554 systemd[1]: Mounted media.mount - External Media Directory.
Mar 4 01:15:07.518686 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 4 01:15:07.522432 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 4 01:15:07.526444 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 4 01:15:07.530246 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 4 01:15:07.534369 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 4 01:15:07.538747 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 4 01:15:07.539041 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 4 01:15:07.543209 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 4 01:15:07.543427 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 4 01:15:07.547453 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 4 01:15:07.547672 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 4 01:15:07.551467 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 4 01:15:07.551677 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 4 01:15:07.555948 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 4 01:15:07.556184 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 4 01:15:07.560153 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 4 01:15:07.560359 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 4 01:15:07.564201 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 4 01:15:07.568127 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 4 01:15:07.572790 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 4 01:15:07.586351 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 4 01:15:07.595326 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 4 01:15:07.607967 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 4 01:15:07.613028 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 4 01:15:07.616543 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 4 01:15:07.616598 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 4 01:15:07.620909 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 4 01:15:07.634072 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 4 01:15:07.639418 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 4 01:15:07.642735 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 4 01:15:07.645032 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 4 01:15:07.650942 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 4 01:15:07.655315 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 4 01:15:07.657768 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 4 01:15:07.661333 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 4 01:15:07.662166 systemd-journald[1139]: Time spent on flushing to /var/log/journal/2f43525aa0f1410ca791e86fa9a11aa9 is 21.557ms for 945 entries.
Mar 4 01:15:07.662166 systemd-journald[1139]: System Journal (/var/log/journal/2f43525aa0f1410ca791e86fa9a11aa9) is 8.0M, max 195.6M, 187.6M free.
Mar 4 01:15:07.692764 systemd-journald[1139]: Received client request to flush runtime journal.
Mar 4 01:15:07.667271 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 4 01:15:07.672578 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 4 01:15:07.681556 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 4 01:15:07.687172 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 4 01:15:07.701505 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 4 01:15:07.710586 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 4 01:15:07.721299 kernel: loop0: detected capacity change from 0 to 142488
Mar 4 01:15:07.715207 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 4 01:15:07.722799 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 4 01:15:07.725136 systemd-tmpfiles[1172]: ACLs are not supported, ignoring.
Mar 4 01:15:07.725154 systemd-tmpfiles[1172]: ACLs are not supported, ignoring.
Mar 4 01:15:07.727577 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 4 01:15:07.733252 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 4 01:15:07.738027 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 4 01:15:07.749678 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 4 01:15:07.756897 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 4 01:15:07.762222 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 4 01:15:07.773521 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 4 01:15:07.779606 udevadm[1174]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 4 01:15:07.791910 kernel: loop1: detected capacity change from 0 to 140768
Mar 4 01:15:07.796076 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 4 01:15:07.797218 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 4 01:15:07.824804 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 4 01:15:07.837344 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 4 01:15:07.856891 kernel: loop2: detected capacity change from 0 to 217752
Mar 4 01:15:07.867911 systemd-tmpfiles[1193]: ACLs are not supported, ignoring.
Mar 4 01:15:07.868335 systemd-tmpfiles[1193]: ACLs are not supported, ignoring.
Mar 4 01:15:07.875698 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 4 01:15:07.904909 kernel: loop3: detected capacity change from 0 to 142488
Mar 4 01:15:07.925079 kernel: loop4: detected capacity change from 0 to 140768
Mar 4 01:15:07.946881 kernel: loop5: detected capacity change from 0 to 217752
Mar 4 01:15:07.960808 (sd-merge)[1197]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 4 01:15:07.961630 (sd-merge)[1197]: Merged extensions into '/usr'.
Mar 4 01:15:07.966228 systemd[1]: Reloading requested from client PID 1171 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 4 01:15:07.966414 systemd[1]: Reloading...
Mar 4 01:15:08.045045 zram_generator::config[1223]: No configuration found.
Mar 4 01:15:08.076972 ldconfig[1166]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 4 01:15:08.165954 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 4 01:15:08.211778 systemd[1]: Reloading finished in 244 ms.
Mar 4 01:15:08.247815 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 4 01:15:08.252115 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 4 01:15:08.257245 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 4 01:15:08.283306 systemd[1]: Starting ensure-sysext.service...
Mar 4 01:15:08.288185 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 4 01:15:08.293960 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 4 01:15:08.299639 systemd[1]: Reloading requested from client PID 1261 ('systemctl') (unit ensure-sysext.service)...
Mar 4 01:15:08.299689 systemd[1]: Reloading...
Mar 4 01:15:08.325327 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 4 01:15:08.325973 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 4 01:15:08.327439 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 4 01:15:08.327909 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Mar 4 01:15:08.328113 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Mar 4 01:15:08.328788 systemd-udevd[1263]: Using default interface naming scheme 'v255'.
Mar 4 01:15:08.334068 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
Mar 4 01:15:08.334189 systemd-tmpfiles[1262]: Skipping /boot
Mar 4 01:15:08.358396 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
Mar 4 01:15:08.359081 systemd-tmpfiles[1262]: Skipping /boot
Mar 4 01:15:08.380060 zram_generator::config[1292]: No configuration found.
Mar 4 01:15:08.471936 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1312)
Mar 4 01:15:08.485885 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 4 01:15:08.492920 kernel: ACPI: button: Power Button [PWRF]
Mar 4 01:15:08.525420 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 4 01:15:08.525691 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 4 01:15:08.525949 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 4 01:15:08.523610 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 4 01:15:08.590285 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 4 01:15:08.591084 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 4 01:15:08.597518 systemd[1]: Reloading finished in 297 ms.
Mar 4 01:15:08.679974 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 4 01:15:08.692346 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 4 01:15:08.696385 kernel: kvm_amd: TSC scaling supported
Mar 4 01:15:08.696421 kernel: kvm_amd: Nested Virtualization enabled
Mar 4 01:15:08.696452 kernel: kvm_amd: Nested Paging enabled
Mar 4 01:15:08.699819 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 4 01:15:08.699890 kernel: mousedev: PS/2 mouse device common for all mice
Mar 4 01:15:08.699913 kernel: kvm_amd: PMU virtualization is disabled
Mar 4 01:15:08.750939 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 4 01:15:08.763532 kernel: EDAC MC: Ver: 3.0.0
Mar 4 01:15:08.776775 systemd[1]: Finished ensure-sysext.service.
Mar 4 01:15:08.784128 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 4 01:15:08.804501 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 4 01:15:08.821217 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 4 01:15:08.826938 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 4 01:15:08.831818 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 4 01:15:08.834178 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 4 01:15:08.841342 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 4 01:15:08.848381 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 4 01:15:08.851058 lvm[1368]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 4 01:15:08.855104 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 4 01:15:08.861127 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 4 01:15:08.865070 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 4 01:15:08.866224 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 4 01:15:08.878798 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 4 01:15:08.882322 augenrules[1382]: No rules
Mar 4 01:15:08.886169 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 4 01:15:08.888781 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 4 01:15:08.894104 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 4 01:15:08.903260 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 4 01:15:08.910190 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 01:15:08.915110 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 4 01:15:08.916636 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 4 01:15:08.921936 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 4 01:15:08.927805 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 4 01:15:08.928124 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 4 01:15:08.933534 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 4 01:15:08.933762 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 4 01:15:08.938299 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 4 01:15:08.938459 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 4 01:15:08.944933 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 4 01:15:08.950162 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 4 01:15:08.950392 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 4 01:15:08.951611 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 4 01:15:08.953348 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 4 01:15:08.961208 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 4 01:15:08.975232 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 4 01:15:08.975366 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 4 01:15:08.975462 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 4 01:15:08.985214 lvm[1405]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 4 01:15:08.979158 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 4 01:15:08.985303 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 4 01:15:08.985404 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 4 01:15:08.991093 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Mar 4 01:15:08.996229 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 4 01:15:09.016138 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 4 01:15:09.102413 systemd-networkd[1388]: lo: Link UP Mar 4 01:15:09.102422 systemd-networkd[1388]: lo: Gained carrier Mar 4 01:15:09.104689 systemd-networkd[1388]: Enumeration completed Mar 4 01:15:09.105424 systemd-resolved[1390]: Positive Trust Anchors: Mar 4 01:15:09.105463 systemd-resolved[1390]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 4 01:15:09.105489 systemd-resolved[1390]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 4 01:15:09.106139 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 4 01:15:09.106201 systemd-networkd[1388]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 4 01:15:09.107339 systemd-networkd[1388]: eth0: Link UP Mar 4 01:15:09.107410 systemd-networkd[1388]: eth0: Gained carrier Mar 4 01:15:09.107456 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 4 01:15:09.109285 systemd-resolved[1390]: Defaulting to hostname 'linux'. 
Mar 4 01:15:09.120897 systemd-networkd[1388]: eth0: DHCPv4 address 10.0.0.125/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 4 01:15:09.121721 systemd-timesyncd[1391]: Network configuration changed, trying to establish connection. Mar 4 01:15:10.523554 systemd-resolved[1390]: Clock change detected. Flushing caches. Mar 4 01:15:10.523591 systemd-timesyncd[1391]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 4 01:15:10.523646 systemd-timesyncd[1391]: Initial clock synchronization to Wed 2026-03-04 01:15:10.523482 UTC. Mar 4 01:15:10.570980 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 4 01:15:10.578041 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 4 01:15:10.582085 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 4 01:15:10.585690 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 4 01:15:10.589548 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 4 01:15:10.595239 systemd[1]: Reached target network.target - Network. Mar 4 01:15:10.598041 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 4 01:15:10.601622 systemd[1]: Reached target sysinit.target - System Initialization. Mar 4 01:15:10.604994 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 4 01:15:10.609048 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 4 01:15:10.613112 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 4 01:15:10.617081 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 4 01:15:10.617113 systemd[1]: Reached target paths.target - Path Units. Mar 4 01:15:10.620996 systemd[1]: Reached target time-set.target - System Time Set. 
Mar 4 01:15:10.625299 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 4 01:15:10.628843 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 4 01:15:10.632648 systemd[1]: Reached target timers.target - Timer Units. Mar 4 01:15:10.636459 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 4 01:15:10.641502 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 4 01:15:10.655263 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 4 01:15:10.660208 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 4 01:15:10.664596 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 4 01:15:10.668121 systemd[1]: Reached target sockets.target - Socket Units. Mar 4 01:15:10.672087 systemd[1]: Reached target basic.target - Basic System. Mar 4 01:15:10.674968 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 4 01:15:10.674998 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 4 01:15:10.676144 systemd[1]: Starting containerd.service - containerd container runtime... Mar 4 01:15:10.681044 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 4 01:15:10.685393 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 4 01:15:10.690059 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 4 01:15:10.695079 jq[1427]: false Mar 4 01:15:10.695131 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 4 01:15:10.696417 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 4 01:15:10.704011 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Mar 4 01:15:10.709394 dbus-daemon[1426]: [system] SELinux support is enabled Mar 4 01:15:10.710694 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 4 01:15:10.715584 extend-filesystems[1428]: Found loop3 Mar 4 01:15:10.715584 extend-filesystems[1428]: Found loop4 Mar 4 01:15:10.715584 extend-filesystems[1428]: Found loop5 Mar 4 01:15:10.715584 extend-filesystems[1428]: Found sr0 Mar 4 01:15:10.715584 extend-filesystems[1428]: Found vda Mar 4 01:15:10.715584 extend-filesystems[1428]: Found vda1 Mar 4 01:15:10.715584 extend-filesystems[1428]: Found vda2 Mar 4 01:15:10.715584 extend-filesystems[1428]: Found vda3 Mar 4 01:15:10.715584 extend-filesystems[1428]: Found usr Mar 4 01:15:10.715584 extend-filesystems[1428]: Found vda4 Mar 4 01:15:10.715584 extend-filesystems[1428]: Found vda6 Mar 4 01:15:10.715584 extend-filesystems[1428]: Found vda7 Mar 4 01:15:10.715584 extend-filesystems[1428]: Found vda9 Mar 4 01:15:10.715584 extend-filesystems[1428]: Checking size of /dev/vda9 Mar 4 01:15:10.867887 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 4 01:15:10.867992 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1317) Mar 4 01:15:10.868015 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 4 01:15:10.868148 extend-filesystems[1428]: Resized partition /dev/vda9 Mar 4 01:15:10.719137 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 4 01:15:10.876713 extend-filesystems[1444]: resize2fs 1.47.1 (20-May-2024) Mar 4 01:15:10.876713 extend-filesystems[1444]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 4 01:15:10.876713 extend-filesystems[1444]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 4 01:15:10.876713 extend-filesystems[1444]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 4 01:15:10.728685 systemd[1]: Starting systemd-logind.service - User Login Management... 
Mar 4 01:15:10.907655 extend-filesystems[1428]: Resized filesystem in /dev/vda9 Mar 4 01:15:10.729265 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 4 01:15:10.729660 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 4 01:15:10.911460 update_engine[1446]: I20260304 01:15:10.813477 1446 main.cc:92] Flatcar Update Engine starting Mar 4 01:15:10.911460 update_engine[1446]: I20260304 01:15:10.817709 1446 update_check_scheduler.cc:74] Next update check in 7m26s Mar 4 01:15:10.730518 systemd[1]: Starting update-engine.service - Update Engine... Mar 4 01:15:10.911969 jq[1447]: true Mar 4 01:15:10.734879 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 4 01:15:10.735979 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 4 01:15:10.912371 tar[1449]: linux-amd64/LICENSE Mar 4 01:15:10.912371 tar[1449]: linux-amd64/helm Mar 4 01:15:10.742332 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 4 01:15:10.912674 jq[1450]: true Mar 4 01:15:10.742532 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 4 01:15:10.743076 systemd[1]: motdgen.service: Deactivated successfully. Mar 4 01:15:10.743262 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 4 01:15:10.914121 bash[1474]: Updated "/home/core/.ssh/authorized_keys" Mar 4 01:15:10.746266 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 4 01:15:10.746442 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Mar 4 01:15:10.757698 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 4 01:15:10.757734 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 4 01:15:10.757831 (ntainerd)[1451]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 4 01:15:10.766441 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 4 01:15:10.766464 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 4 01:15:10.817883 systemd[1]: Started update-engine.service - Update Engine. Mar 4 01:15:10.830021 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 4 01:15:10.868265 systemd-logind[1443]: Watching system buttons on /dev/input/event1 (Power Button) Mar 4 01:15:10.868292 systemd-logind[1443]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 4 01:15:10.869391 systemd-logind[1443]: New seat seat0. Mar 4 01:15:10.871641 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 4 01:15:10.872006 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 4 01:15:10.894827 systemd[1]: Started systemd-logind.service - User Login Management. Mar 4 01:15:10.910650 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 4 01:15:10.924715 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Mar 4 01:15:10.928968 locksmithd[1467]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 4 01:15:11.005364 containerd[1451]: time="2026-03-04T01:15:11.005229725Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 4 01:15:11.011529 sshd_keygen[1460]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 4 01:15:11.026050 containerd[1451]: time="2026-03-04T01:15:11.025997933Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 4 01:15:11.028571 containerd[1451]: time="2026-03-04T01:15:11.028537099Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 4 01:15:11.029811 containerd[1451]: time="2026-03-04T01:15:11.028630163Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 4 01:15:11.029811 containerd[1451]: time="2026-03-04T01:15:11.028654518Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 4 01:15:11.029811 containerd[1451]: time="2026-03-04T01:15:11.028953406Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 4 01:15:11.029811 containerd[1451]: time="2026-03-04T01:15:11.028976529Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 4 01:15:11.029811 containerd[1451]: time="2026-03-04T01:15:11.029055927Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 4 01:15:11.029811 containerd[1451]: time="2026-03-04T01:15:11.029071827Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 4 01:15:11.029811 containerd[1451]: time="2026-03-04T01:15:11.029278482Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 4 01:15:11.029811 containerd[1451]: time="2026-03-04T01:15:11.029297508Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 4 01:15:11.029811 containerd[1451]: time="2026-03-04T01:15:11.029313599Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 4 01:15:11.029811 containerd[1451]: time="2026-03-04T01:15:11.029326463Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 4 01:15:11.029811 containerd[1451]: time="2026-03-04T01:15:11.029431488Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 4 01:15:11.029811 containerd[1451]: time="2026-03-04T01:15:11.029697865Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 4 01:15:11.030063 containerd[1451]: time="2026-03-04T01:15:11.029906685Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 4 01:15:11.030063 containerd[1451]: time="2026-03-04T01:15:11.029966727Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 4 01:15:11.030128 containerd[1451]: time="2026-03-04T01:15:11.030089827Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 4 01:15:11.030235 containerd[1451]: time="2026-03-04T01:15:11.030181839Z" level=info msg="metadata content store policy set" policy=shared Mar 4 01:15:11.036518 containerd[1451]: time="2026-03-04T01:15:11.036383735Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 4 01:15:11.036518 containerd[1451]: time="2026-03-04T01:15:11.036430523Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 4 01:15:11.036518 containerd[1451]: time="2026-03-04T01:15:11.036450329Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 4 01:15:11.036518 containerd[1451]: time="2026-03-04T01:15:11.036467952Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 4 01:15:11.036518 containerd[1451]: time="2026-03-04T01:15:11.036494752Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 4 01:15:11.036652 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 4 01:15:11.036987 containerd[1451]: time="2026-03-04T01:15:11.036894028Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 4 01:15:11.037485 containerd[1451]: time="2026-03-04T01:15:11.037429016Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Mar 4 01:15:11.037635 containerd[1451]: time="2026-03-04T01:15:11.037589114Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 4 01:15:11.037659 containerd[1451]: time="2026-03-04T01:15:11.037635802Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 4 01:15:11.037659 containerd[1451]: time="2026-03-04T01:15:11.037653715Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 4 01:15:11.037701 containerd[1451]: time="2026-03-04T01:15:11.037671018Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 4 01:15:11.037701 containerd[1451]: time="2026-03-04T01:15:11.037686827Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 4 01:15:11.037732 containerd[1451]: time="2026-03-04T01:15:11.037702116Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 4 01:15:11.037732 containerd[1451]: time="2026-03-04T01:15:11.037718276Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 4 01:15:11.037829 containerd[1451]: time="2026-03-04T01:15:11.037801952Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 4 01:15:11.037829 containerd[1451]: time="2026-03-04T01:15:11.037821418Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 4 01:15:11.037861 containerd[1451]: time="2026-03-04T01:15:11.037837078Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Mar 4 01:15:11.037861 containerd[1451]: time="2026-03-04T01:15:11.037852376Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 4 01:15:11.037901 containerd[1451]: time="2026-03-04T01:15:11.037875680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 4 01:15:11.037901 containerd[1451]: time="2026-03-04T01:15:11.037892091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 4 01:15:11.037978 containerd[1451]: time="2026-03-04T01:15:11.037907760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 4 01:15:11.037978 containerd[1451]: time="2026-03-04T01:15:11.037960268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 4 01:15:11.038016 containerd[1451]: time="2026-03-04T01:15:11.037977410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 4 01:15:11.038016 containerd[1451]: time="2026-03-04T01:15:11.037994001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 4 01:15:11.038016 containerd[1451]: time="2026-03-04T01:15:11.038008688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 4 01:15:11.038059 containerd[1451]: time="2026-03-04T01:15:11.038024527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 4 01:15:11.038059 containerd[1451]: time="2026-03-04T01:15:11.038040357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 4 01:15:11.038093 containerd[1451]: time="2026-03-04T01:15:11.038072958Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Mar 4 01:15:11.038093 containerd[1451]: time="2026-03-04T01:15:11.038088046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 4 01:15:11.038124 containerd[1451]: time="2026-03-04T01:15:11.038103525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 4 01:15:11.038124 containerd[1451]: time="2026-03-04T01:15:11.038119304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 4 01:15:11.038154 containerd[1451]: time="2026-03-04T01:15:11.038144281Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 4 01:15:11.038206 containerd[1451]: time="2026-03-04T01:15:11.038167424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 4 01:15:11.038234 containerd[1451]: time="2026-03-04T01:15:11.038215534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 4 01:15:11.038257 containerd[1451]: time="2026-03-04T01:15:11.038231043Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 4 01:15:11.038313 containerd[1451]: time="2026-03-04T01:15:11.038280765Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 4 01:15:11.038334 containerd[1451]: time="2026-03-04T01:15:11.038320800Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 4 01:15:11.038352 containerd[1451]: time="2026-03-04T01:15:11.038335217Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Mar 4 01:15:11.038369 containerd[1451]: time="2026-03-04T01:15:11.038350896Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 4 01:15:11.038369 containerd[1451]: time="2026-03-04T01:15:11.038363270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 4 01:15:11.038405 containerd[1451]: time="2026-03-04T01:15:11.038378498Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 4 01:15:11.038405 containerd[1451]: time="2026-03-04T01:15:11.038391032Z" level=info msg="NRI interface is disabled by configuration." Mar 4 01:15:11.038436 containerd[1451]: time="2026-03-04T01:15:11.038404727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 4 01:15:11.038823 containerd[1451]: time="2026-03-04T01:15:11.038690480Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: 
SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 4 01:15:11.039143 containerd[1451]: time="2026-03-04T01:15:11.039085879Z" level=info msg="Connect containerd service" Mar 4 01:15:11.039186 containerd[1451]: time="2026-03-04T01:15:11.039166509Z" level=info msg="using legacy CRI server" Mar 4 01:15:11.039186 containerd[1451]: time="2026-03-04T01:15:11.039177119Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 4 01:15:11.039325 containerd[1451]: time="2026-03-04T01:15:11.039273759Z" 
level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 4 01:15:11.040115 containerd[1451]: time="2026-03-04T01:15:11.040072200Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 4 01:15:11.040397 containerd[1451]: time="2026-03-04T01:15:11.040361528Z" level=info msg="Start subscribing containerd event" Mar 4 01:15:11.040659 containerd[1451]: time="2026-03-04T01:15:11.040459242Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 4 01:15:11.040659 containerd[1451]: time="2026-03-04T01:15:11.040484848Z" level=info msg="Start recovering state" Mar 4 01:15:11.040870 containerd[1451]: time="2026-03-04T01:15:11.040526798Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 4 01:15:11.040901 containerd[1451]: time="2026-03-04T01:15:11.040845191Z" level=info msg="Start event monitor" Mar 4 01:15:11.040901 containerd[1451]: time="2026-03-04T01:15:11.040884104Z" level=info msg="Start snapshots syncer" Mar 4 01:15:11.040901 containerd[1451]: time="2026-03-04T01:15:11.040892770Z" level=info msg="Start cni network conf syncer for default" Mar 4 01:15:11.040901 containerd[1451]: time="2026-03-04T01:15:11.040899943Z" level=info msg="Start streaming server" Mar 4 01:15:11.043520 containerd[1451]: time="2026-03-04T01:15:11.043494297Z" level=info msg="containerd successfully booted in 0.039380s" Mar 4 01:15:11.046161 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 4 01:15:11.049225 systemd[1]: Started containerd.service - containerd container runtime. Mar 4 01:15:11.072066 systemd[1]: issuegen.service: Deactivated successfully. Mar 4 01:15:11.072355 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Mar 4 01:15:11.096200 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 4 01:15:11.109670 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 4 01:15:11.126264 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 4 01:15:11.131075 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 4 01:15:11.134600 systemd[1]: Reached target getty.target - Login Prompts. Mar 4 01:15:11.283225 tar[1449]: linux-amd64/README.md Mar 4 01:15:11.302436 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 4 01:15:12.512603 systemd-networkd[1388]: eth0: Gained IPv6LL Mar 4 01:15:12.518795 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 4 01:15:12.526309 systemd[1]: Reached target network-online.target - Network is Online. Mar 4 01:15:12.554399 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 4 01:15:12.562352 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:15:12.569169 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 4 01:15:12.598200 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 4 01:15:12.598432 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 4 01:15:12.604199 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 4 01:15:12.610660 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 4 01:15:13.389380 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 01:15:13.396582 systemd[1]: Reached target multi-user.target - Multi-User System. 
Mar 4 01:15:13.397121 (kubelet)[1537]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 4 01:15:13.402494 systemd[1]: Startup finished in 1.472s (kernel) + 7.425s (initrd) + 5.754s (userspace) = 14.651s. Mar 4 01:15:13.857437 kubelet[1537]: E0304 01:15:13.856562 1537 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 4 01:15:13.861058 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 4 01:15:13.861278 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 4 01:15:13.861689 systemd[1]: kubelet.service: Consumed 1.031s CPU time. Mar 4 01:15:15.205122 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 4 01:15:15.206864 systemd[1]: Started sshd@0-10.0.0.125:22-10.0.0.1:57416.service - OpenSSH per-connection server daemon (10.0.0.1:57416). Mar 4 01:15:15.264807 sshd[1550]: Accepted publickey for core from 10.0.0.1 port 57416 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:15:15.267159 sshd[1550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:15:15.279640 systemd-logind[1443]: New session 1 of user core. Mar 4 01:15:15.281448 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 4 01:15:15.289187 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 4 01:15:15.303615 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 4 01:15:15.313219 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Mar 4 01:15:15.316617 (systemd)[1554]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 4 01:15:15.412485 systemd[1554]: Queued start job for default target default.target. Mar 4 01:15:15.424282 systemd[1554]: Created slice app.slice - User Application Slice. Mar 4 01:15:15.424337 systemd[1554]: Reached target paths.target - Paths. Mar 4 01:15:15.424350 systemd[1554]: Reached target timers.target - Timers. Mar 4 01:15:15.426339 systemd[1554]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 4 01:15:15.443458 systemd[1554]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 4 01:15:15.443678 systemd[1554]: Reached target sockets.target - Sockets. Mar 4 01:15:15.443734 systemd[1554]: Reached target basic.target - Basic System. Mar 4 01:15:15.443864 systemd[1554]: Reached target default.target - Main User Target. Mar 4 01:15:15.443905 systemd[1554]: Startup finished in 119ms. Mar 4 01:15:15.444307 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 4 01:15:15.446420 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 4 01:15:15.522150 systemd[1]: Started sshd@1-10.0.0.125:22-10.0.0.1:57422.service - OpenSSH per-connection server daemon (10.0.0.1:57422). Mar 4 01:15:15.572151 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 57422 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:15:15.574233 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:15:15.580579 systemd-logind[1443]: New session 2 of user core. Mar 4 01:15:15.605055 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 4 01:15:15.666206 sshd[1565]: pam_unix(sshd:session): session closed for user core Mar 4 01:15:15.678447 systemd[1]: sshd@1-10.0.0.125:22-10.0.0.1:57422.service: Deactivated successfully. Mar 4 01:15:15.680123 systemd[1]: session-2.scope: Deactivated successfully. 
Mar 4 01:15:15.681891 systemd-logind[1443]: Session 2 logged out. Waiting for processes to exit. Mar 4 01:15:15.683283 systemd[1]: Started sshd@2-10.0.0.125:22-10.0.0.1:57424.service - OpenSSH per-connection server daemon (10.0.0.1:57424). Mar 4 01:15:15.684199 systemd-logind[1443]: Removed session 2. Mar 4 01:15:15.716188 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 57424 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:15:15.717579 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:15:15.722592 systemd-logind[1443]: New session 3 of user core. Mar 4 01:15:15.732011 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 4 01:15:15.784051 sshd[1572]: pam_unix(sshd:session): session closed for user core Mar 4 01:15:15.794647 systemd[1]: sshd@2-10.0.0.125:22-10.0.0.1:57424.service: Deactivated successfully. Mar 4 01:15:15.796376 systemd[1]: session-3.scope: Deactivated successfully. Mar 4 01:15:15.798006 systemd-logind[1443]: Session 3 logged out. Waiting for processes to exit. Mar 4 01:15:15.807098 systemd[1]: Started sshd@3-10.0.0.125:22-10.0.0.1:57432.service - OpenSSH per-connection server daemon (10.0.0.1:57432). Mar 4 01:15:15.808128 systemd-logind[1443]: Removed session 3. Mar 4 01:15:15.838101 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 57432 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:15:15.839865 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:15:15.845413 systemd-logind[1443]: New session 4 of user core. Mar 4 01:15:15.853025 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 4 01:15:15.913277 sshd[1579]: pam_unix(sshd:session): session closed for user core Mar 4 01:15:15.922008 systemd[1]: sshd@3-10.0.0.125:22-10.0.0.1:57432.service: Deactivated successfully. Mar 4 01:15:15.923680 systemd[1]: session-4.scope: Deactivated successfully. 
Mar 4 01:15:15.925713 systemd-logind[1443]: Session 4 logged out. Waiting for processes to exit. Mar 4 01:15:15.934200 systemd[1]: Started sshd@4-10.0.0.125:22-10.0.0.1:57438.service - OpenSSH per-connection server daemon (10.0.0.1:57438). Mar 4 01:15:15.935471 systemd-logind[1443]: Removed session 4. Mar 4 01:15:15.966853 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 57438 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:15:15.968854 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:15:15.975596 systemd-logind[1443]: New session 5 of user core. Mar 4 01:15:15.985049 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 4 01:15:16.054325 sudo[1589]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 4 01:15:16.054732 sudo[1589]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 4 01:15:16.073037 sudo[1589]: pam_unix(sudo:session): session closed for user root Mar 4 01:15:16.075469 sshd[1586]: pam_unix(sshd:session): session closed for user core Mar 4 01:15:16.084817 systemd[1]: sshd@4-10.0.0.125:22-10.0.0.1:57438.service: Deactivated successfully. Mar 4 01:15:16.086519 systemd[1]: session-5.scope: Deactivated successfully. Mar 4 01:15:16.088215 systemd-logind[1443]: Session 5 logged out. Waiting for processes to exit. Mar 4 01:15:16.089678 systemd[1]: Started sshd@5-10.0.0.125:22-10.0.0.1:57442.service - OpenSSH per-connection server daemon (10.0.0.1:57442). Mar 4 01:15:16.091179 systemd-logind[1443]: Removed session 5. Mar 4 01:15:16.127586 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 57442 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:15:16.129294 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:15:16.134911 systemd-logind[1443]: New session 6 of user core. 
Mar 4 01:15:16.145033 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 4 01:15:16.204108 sudo[1598]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 4 01:15:16.204557 sudo[1598]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 4 01:15:16.210308 sudo[1598]: pam_unix(sudo:session): session closed for user root Mar 4 01:15:16.219273 sudo[1597]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 4 01:15:16.219700 sudo[1597]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 4 01:15:16.242092 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 4 01:15:16.244652 auditctl[1601]: No rules Mar 4 01:15:16.245159 systemd[1]: audit-rules.service: Deactivated successfully. Mar 4 01:15:16.245396 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 4 01:15:16.248335 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 4 01:15:16.288395 augenrules[1619]: No rules Mar 4 01:15:16.289550 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 4 01:15:16.290886 sudo[1597]: pam_unix(sudo:session): session closed for user root Mar 4 01:15:16.293468 sshd[1594]: pam_unix(sshd:session): session closed for user core Mar 4 01:15:16.300618 systemd[1]: sshd@5-10.0.0.125:22-10.0.0.1:57442.service: Deactivated successfully. Mar 4 01:15:16.302441 systemd[1]: session-6.scope: Deactivated successfully. Mar 4 01:15:16.304150 systemd-logind[1443]: Session 6 logged out. Waiting for processes to exit. Mar 4 01:15:16.318298 systemd[1]: Started sshd@6-10.0.0.125:22-10.0.0.1:57444.service - OpenSSH per-connection server daemon (10.0.0.1:57444). Mar 4 01:15:16.320042 systemd-logind[1443]: Removed session 6. 
Mar 4 01:15:16.347847 sshd[1627]: Accepted publickey for core from 10.0.0.1 port 57444 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:15:16.349337 sshd[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:15:16.354433 systemd-logind[1443]: New session 7 of user core. Mar 4 01:15:16.369993 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 4 01:15:16.426255 sudo[1630]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 4 01:15:16.426624 sudo[1630]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 4 01:15:16.728106 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 4 01:15:16.728282 (dockerd)[1649]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 4 01:15:17.128706 dockerd[1649]: time="2026-03-04T01:15:17.128286886Z" level=info msg="Starting up" Mar 4 01:15:17.426010 dockerd[1649]: time="2026-03-04T01:15:17.425659829Z" level=info msg="Loading containers: start." Mar 4 01:15:17.609818 kernel: Initializing XFRM netlink socket Mar 4 01:15:17.740271 systemd-networkd[1388]: docker0: Link UP Mar 4 01:15:17.771277 dockerd[1649]: time="2026-03-04T01:15:17.771158462Z" level=info msg="Loading containers: done." Mar 4 01:15:17.793299 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2714706569-merged.mount: Deactivated successfully. 
Mar 4 01:15:17.795660 dockerd[1649]: time="2026-03-04T01:15:17.795568033Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 4 01:15:17.795845 dockerd[1649]: time="2026-03-04T01:15:17.795716741Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 4 01:15:17.795905 dockerd[1649]: time="2026-03-04T01:15:17.795892048Z" level=info msg="Daemon has completed initialization" Mar 4 01:15:17.861115 dockerd[1649]: time="2026-03-04T01:15:17.860884062Z" level=info msg="API listen on /run/docker.sock" Mar 4 01:15:17.861269 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 4 01:15:18.380310 containerd[1451]: time="2026-03-04T01:15:18.380193488Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\"" Mar 4 01:15:18.923467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2501567540.mount: Deactivated successfully. 
Mar 4 01:15:20.673078 containerd[1451]: time="2026-03-04T01:15:20.672566011Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:15:20.673563 containerd[1451]: time="2026-03-04T01:15:20.673160131Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=27696467" Mar 4 01:15:20.674840 containerd[1451]: time="2026-03-04T01:15:20.674719981Z" level=info msg="ImageCreate event name:\"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:15:20.679109 containerd[1451]: time="2026-03-04T01:15:20.679045925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:15:20.680875 containerd[1451]: time="2026-03-04T01:15:20.680801049Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"27693066\" in 2.300538382s" Mar 4 01:15:20.680875 containerd[1451]: time="2026-03-04T01:15:20.680855230Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\"" Mar 4 01:15:20.682120 containerd[1451]: time="2026-03-04T01:15:20.682055209Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\"" Mar 4 01:15:21.728963 kernel: hrtimer: interrupt took 11688207 ns Mar 4 01:15:22.714808 containerd[1451]: time="2026-03-04T01:15:22.714666390Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:15:22.715598 containerd[1451]: time="2026-03-04T01:15:22.715405563Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=21450700" Mar 4 01:15:22.716834 containerd[1451]: time="2026-03-04T01:15:22.716709914Z" level=info msg="ImageCreate event name:\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:15:22.719862 containerd[1451]: time="2026-03-04T01:15:22.719820303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:15:22.720909 containerd[1451]: time="2026-03-04T01:15:22.720866899Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"23142311\" in 2.038744574s" Mar 4 01:15:22.720946 containerd[1451]: time="2026-03-04T01:15:22.720916781Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\"" Mar 4 01:15:22.722057 containerd[1451]: time="2026-03-04T01:15:22.721859922Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\"" Mar 4 01:15:24.112405 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 4 01:15:24.120144 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 4 01:15:24.491971 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 01:15:24.512327 (kubelet)[1872]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 4 01:15:24.632536 containerd[1451]: time="2026-03-04T01:15:24.632436635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:15:24.633302 containerd[1451]: time="2026-03-04T01:15:24.633264279Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=15548429" Mar 4 01:15:24.634486 containerd[1451]: time="2026-03-04T01:15:24.634446566Z" level=info msg="ImageCreate event name:\"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:15:24.638582 containerd[1451]: time="2026-03-04T01:15:24.638509550Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:15:24.639615 containerd[1451]: time="2026-03-04T01:15:24.639562966Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"17240058\" in 1.917674271s" Mar 4 01:15:24.639658 containerd[1451]: time="2026-03-04T01:15:24.639592060Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\"" Mar 4 01:15:24.642406 containerd[1451]: time="2026-03-04T01:15:24.642097066Z" 
level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\"" Mar 4 01:15:24.867867 kubelet[1872]: E0304 01:15:24.867617 1872 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 4 01:15:24.874859 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 4 01:15:24.875237 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 4 01:15:26.541979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3861587034.mount: Deactivated successfully. Mar 4 01:15:27.573193 containerd[1451]: time="2026-03-04T01:15:27.573101905Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:15:27.573968 containerd[1451]: time="2026-03-04T01:15:27.573865312Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=25685312" Mar 4 01:15:27.575584 containerd[1451]: time="2026-03-04T01:15:27.575522484Z" level=info msg="ImageCreate event name:\"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:15:27.579666 containerd[1451]: time="2026-03-04T01:15:27.579577793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:15:27.581035 containerd[1451]: time="2026-03-04T01:15:27.580914112Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"25684331\" in 2.938725215s" Mar 4 01:15:27.581035 containerd[1451]: time="2026-03-04T01:15:27.580990565Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\"" Mar 4 01:15:27.583205 containerd[1451]: time="2026-03-04T01:15:27.583154601Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Mar 4 01:15:28.059291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2923473469.mount: Deactivated successfully. Mar 4 01:15:29.676280 containerd[1451]: time="2026-03-04T01:15:29.676138464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:15:29.677168 containerd[1451]: time="2026-03-04T01:15:29.677048225Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556542" Mar 4 01:15:29.678647 containerd[1451]: time="2026-03-04T01:15:29.678593031Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:15:29.682620 containerd[1451]: time="2026-03-04T01:15:29.682564969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:15:29.684321 containerd[1451]: time="2026-03-04T01:15:29.684269917Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 2.101066415s" Mar 4 01:15:29.684321 containerd[1451]: time="2026-03-04T01:15:29.684316974Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\"" Mar 4 01:15:29.685160 containerd[1451]: time="2026-03-04T01:15:29.685017386Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 4 01:15:30.077043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3094904187.mount: Deactivated successfully. Mar 4 01:15:30.083461 containerd[1451]: time="2026-03-04T01:15:30.083381265Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:15:30.084438 containerd[1451]: time="2026-03-04T01:15:30.084327660Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Mar 4 01:15:30.085666 containerd[1451]: time="2026-03-04T01:15:30.085509465Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:15:30.088180 containerd[1451]: time="2026-03-04T01:15:30.088055608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:15:30.089948 containerd[1451]: time="2026-03-04T01:15:30.089887738Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 404.726483ms" Mar 4 
01:15:30.089948 containerd[1451]: time="2026-03-04T01:15:30.089943301Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 4 01:15:30.090841 containerd[1451]: time="2026-03-04T01:15:30.090573758Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Mar 4 01:15:30.549558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3726548421.mount: Deactivated successfully. Mar 4 01:15:31.424424 containerd[1451]: time="2026-03-04T01:15:31.424304570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:15:31.425566 containerd[1451]: time="2026-03-04T01:15:31.425503267Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23630322" Mar 4 01:15:31.427071 containerd[1451]: time="2026-03-04T01:15:31.427010941Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:15:31.431395 containerd[1451]: time="2026-03-04T01:15:31.431303874Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:15:31.432716 containerd[1451]: time="2026-03-04T01:15:31.432615069Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 1.342010193s" Mar 4 01:15:31.432716 containerd[1451]: time="2026-03-04T01:15:31.432671865Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference 
\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\"" Mar 4 01:15:32.670571 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 01:15:32.682060 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:15:32.712441 systemd[1]: Reloading requested from client PID 2040 ('systemctl') (unit session-7.scope)... Mar 4 01:15:32.712483 systemd[1]: Reloading... Mar 4 01:15:32.807868 zram_generator::config[2082]: No configuration found. Mar 4 01:15:32.914319 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 4 01:15:32.988926 systemd[1]: Reloading finished in 275 ms. Mar 4 01:15:33.042891 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 01:15:33.046564 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:15:33.049002 systemd[1]: kubelet.service: Deactivated successfully. Mar 4 01:15:33.049331 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 01:15:33.059331 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 01:15:33.215272 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 01:15:33.221576 (kubelet)[2129]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 4 01:15:33.272193 kubelet[2129]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 4 01:15:33.548299 kubelet[2129]: I0304 01:15:33.548021 2129 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 4 01:15:33.548299 kubelet[2129]: I0304 01:15:33.548147 2129 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 4 01:15:33.548299 kubelet[2129]: I0304 01:15:33.548174 2129 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 4 01:15:33.548299 kubelet[2129]: I0304 01:15:33.548182 2129 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 4 01:15:33.548529 kubelet[2129]: I0304 01:15:33.548473 2129 server.go:951] "Client rotation is on, will bootstrap in background" Mar 4 01:15:33.587452 kubelet[2129]: E0304 01:15:33.587243 2129 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.125:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.125:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 4 01:15:33.590390 kubelet[2129]: I0304 01:15:33.590252 2129 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 4 01:15:33.593390 kubelet[2129]: E0304 01:15:33.593169 2129 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 4 01:15:33.593390 kubelet[2129]: I0304 01:15:33.593248 2129 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 4 01:15:33.601802 kubelet[2129]: I0304 01:15:33.601728 2129 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 4 01:15:33.602707 kubelet[2129]: I0304 01:15:33.602608 2129 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 4 01:15:33.602945 kubelet[2129]: I0304 01:15:33.602669 2129 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 4 01:15:33.602945 kubelet[2129]: I0304 01:15:33.602942 2129 topology_manager.go:143] "Creating topology manager with none policy" Mar 4 01:15:33.603089 
kubelet[2129]: I0304 01:15:33.602952 2129 container_manager_linux.go:308] "Creating device plugin manager" Mar 4 01:15:33.603089 kubelet[2129]: I0304 01:15:33.603042 2129 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 4 01:15:33.605468 kubelet[2129]: I0304 01:15:33.605391 2129 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 4 01:15:33.605790 kubelet[2129]: I0304 01:15:33.605731 2129 kubelet.go:482] "Attempting to sync node with API server" Mar 4 01:15:33.605915 kubelet[2129]: I0304 01:15:33.605854 2129 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 4 01:15:33.605949 kubelet[2129]: I0304 01:15:33.605929 2129 kubelet.go:394] "Adding apiserver pod source" Mar 4 01:15:33.605949 kubelet[2129]: I0304 01:15:33.605944 2129 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 4 01:15:33.608875 kubelet[2129]: I0304 01:15:33.608216 2129 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 4 01:15:33.610401 kubelet[2129]: I0304 01:15:33.610286 2129 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 4 01:15:33.610401 kubelet[2129]: I0304 01:15:33.610357 2129 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 4 01:15:33.610576 kubelet[2129]: W0304 01:15:33.610491 2129 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 4 01:15:33.614927 kubelet[2129]: I0304 01:15:33.614882 2129 server.go:1257] "Started kubelet" Mar 4 01:15:33.615036 kubelet[2129]: I0304 01:15:33.614972 2129 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 4 01:15:33.615243 kubelet[2129]: I0304 01:15:33.615176 2129 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 4 01:15:33.617835 kubelet[2129]: I0304 01:15:33.615535 2129 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 4 01:15:33.617835 kubelet[2129]: I0304 01:15:33.616083 2129 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 4 01:15:33.617835 kubelet[2129]: I0304 01:15:33.617352 2129 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 4 01:15:33.618885 kubelet[2129]: I0304 01:15:33.618851 2129 server.go:317] "Adding debug handlers to kubelet server" Mar 4 01:15:33.622326 kubelet[2129]: I0304 01:15:33.622276 2129 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 4 01:15:33.623373 kubelet[2129]: I0304 01:15:33.623330 2129 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 4 01:15:33.623601 kubelet[2129]: E0304 01:15:33.623556 2129 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 4 01:15:33.624333 kubelet[2129]: I0304 01:15:33.624246 2129 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 4 01:15:33.624333 kubelet[2129]: I0304 01:15:33.624313 2129 reconciler.go:29] "Reconciler: start to sync state" Mar 4 01:15:33.625965 kubelet[2129]: E0304 01:15:33.625055 2129 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.125:6443: 
connect: connection refused" interval="200ms" Mar 4 01:15:33.625965 kubelet[2129]: I0304 01:15:33.625845 2129 factory.go:223] Registration of the systemd container factory successfully Mar 4 01:15:33.625965 kubelet[2129]: I0304 01:15:33.625936 2129 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 4 01:15:33.628256 kubelet[2129]: E0304 01:15:33.628210 2129 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 4 01:15:33.628502 kubelet[2129]: I0304 01:15:33.628374 2129 factory.go:223] Registration of the containerd container factory successfully Mar 4 01:15:33.636512 kubelet[2129]: E0304 01:15:33.635308 2129 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.125:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.125:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18997e6caf97be1f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-04 01:15:33.614525983 +0000 UTC m=+0.387701053,LastTimestamp:2026-03-04 01:15:33.614525983 +0000 UTC m=+0.387701053,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 4 01:15:33.651946 kubelet[2129]: I0304 01:15:33.651858 2129 cpu_manager.go:225] "Starting" policy="none" Mar 4 01:15:33.651946 kubelet[2129]: I0304 01:15:33.651882 2129 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 4 01:15:33.651946 kubelet[2129]: I0304 01:15:33.651905 2129 state_mem.go:41] "Initialized" 
logger="CPUManager state checkpoint.CPUManager state memory" Mar 4 01:15:33.655146 kubelet[2129]: I0304 01:15:33.655062 2129 policy_none.go:50] "Start" Mar 4 01:15:33.655146 kubelet[2129]: I0304 01:15:33.655153 2129 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 4 01:15:33.655293 kubelet[2129]: I0304 01:15:33.655171 2129 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 4 01:15:33.657822 kubelet[2129]: I0304 01:15:33.657730 2129 policy_none.go:44] "Start" Mar 4 01:15:33.664174 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 4 01:15:33.666014 kubelet[2129]: I0304 01:15:33.665866 2129 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 4 01:15:33.668235 kubelet[2129]: I0304 01:15:33.668130 2129 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 4 01:15:33.668235 kubelet[2129]: I0304 01:15:33.668183 2129 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 4 01:15:33.668235 kubelet[2129]: I0304 01:15:33.668225 2129 kubelet.go:2501] "Starting kubelet main sync loop" Mar 4 01:15:33.668354 kubelet[2129]: E0304 01:15:33.668288 2129 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 4 01:15:33.678624 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 4 01:15:33.683625 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 4 01:15:33.702262 kubelet[2129]: E0304 01:15:33.702192 2129 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 4 01:15:33.702566 kubelet[2129]: I0304 01:15:33.702446 2129 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 4 01:15:33.702605 kubelet[2129]: I0304 01:15:33.702493 2129 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 4 01:15:33.703011 kubelet[2129]: I0304 01:15:33.702901 2129 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 4 01:15:33.703639 kubelet[2129]: E0304 01:15:33.703620 2129 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 4 01:15:33.703840 kubelet[2129]: E0304 01:15:33.703652 2129 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 4 01:15:33.782626 systemd[1]: Created slice kubepods-burstable-pod9d9d2d80db43d7ba140b0b747691ec45.slice - libcontainer container kubepods-burstable-pod9d9d2d80db43d7ba140b0b747691ec45.slice. Mar 4 01:15:33.798458 kubelet[2129]: E0304 01:15:33.798271 2129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:15:33.802227 systemd[1]: Created slice kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice - libcontainer container kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice. 
Mar 4 01:15:33.804659 kubelet[2129]: I0304 01:15:33.804193 2129 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 4 01:15:33.804659 kubelet[2129]: E0304 01:15:33.804514 2129 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.125:6443/api/v1/nodes\": dial tcp 10.0.0.125:6443: connect: connection refused" node="localhost" Mar 4 01:15:33.822922 kubelet[2129]: E0304 01:15:33.822846 2129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:15:33.825644 kubelet[2129]: E0304 01:15:33.825529 2129 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.125:6443: connect: connection refused" interval="400ms" Mar 4 01:15:33.827006 systemd[1]: Created slice kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice - libcontainer container kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice. 
Mar 4 01:15:33.830059 kubelet[2129]: E0304 01:15:33.829932 2129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:15:33.925665 kubelet[2129]: I0304 01:15:33.925571 2129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 4 01:15:33.925665 kubelet[2129]: I0304 01:15:33.925631 2129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 4 01:15:33.925665 kubelet[2129]: I0304 01:15:33.925656 2129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9d9d2d80db43d7ba140b0b747691ec45-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9d9d2d80db43d7ba140b0b747691ec45\") " pod="kube-system/kube-apiserver-localhost" Mar 4 01:15:33.925916 kubelet[2129]: I0304 01:15:33.925684 2129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9d9d2d80db43d7ba140b0b747691ec45-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9d9d2d80db43d7ba140b0b747691ec45\") " pod="kube-system/kube-apiserver-localhost" Mar 4 01:15:33.925916 kubelet[2129]: I0304 01:15:33.925700 2129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 4 01:15:33.925916 kubelet[2129]: I0304 01:15:33.925714 2129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 4 01:15:33.925916 kubelet[2129]: I0304 01:15:33.925727 2129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 4 01:15:33.925916 kubelet[2129]: I0304 01:15:33.925739 2129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost" Mar 4 01:15:33.926029 kubelet[2129]: I0304 01:15:33.925860 2129 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9d9d2d80db43d7ba140b0b747691ec45-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9d9d2d80db43d7ba140b0b747691ec45\") " pod="kube-system/kube-apiserver-localhost" Mar 4 01:15:34.007510 kubelet[2129]: I0304 01:15:34.007380 2129 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 4 01:15:34.008043 kubelet[2129]: E0304 01:15:34.007885 
2129 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.125:6443/api/v1/nodes\": dial tcp 10.0.0.125:6443: connect: connection refused" node="localhost" Mar 4 01:15:34.102301 kubelet[2129]: E0304 01:15:34.102080 2129 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:15:34.103546 containerd[1451]: time="2026-03-04T01:15:34.103291919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9d9d2d80db43d7ba140b0b747691ec45,Namespace:kube-system,Attempt:0,}" Mar 4 01:15:34.127382 kubelet[2129]: E0304 01:15:34.127277 2129 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:15:34.128057 containerd[1451]: time="2026-03-04T01:15:34.127993254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,}" Mar 4 01:15:34.133034 kubelet[2129]: E0304 01:15:34.132927 2129 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:15:34.133488 containerd[1451]: time="2026-03-04T01:15:34.133409615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,}" Mar 4 01:15:34.226335 kubelet[2129]: E0304 01:15:34.226140 2129 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.125:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.125:6443: connect: connection refused" interval="800ms" Mar 4 01:15:34.410929 kubelet[2129]: I0304 01:15:34.410712 
2129 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 4 01:15:34.411416 kubelet[2129]: E0304 01:15:34.411313 2129 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.125:6443/api/v1/nodes\": dial tcp 10.0.0.125:6443: connect: connection refused" node="localhost" Mar 4 01:15:34.505009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3528756455.mount: Deactivated successfully. Mar 4 01:15:34.512626 containerd[1451]: time="2026-03-04T01:15:34.512516732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 01:15:34.513593 containerd[1451]: time="2026-03-04T01:15:34.513564667Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 4 01:15:34.517247 containerd[1451]: time="2026-03-04T01:15:34.517181599Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 01:15:34.518607 containerd[1451]: time="2026-03-04T01:15:34.518478748Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 01:15:34.519418 containerd[1451]: time="2026-03-04T01:15:34.519358760Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 01:15:34.520471 containerd[1451]: time="2026-03-04T01:15:34.520422628Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 4 01:15:34.521305 containerd[1451]: 
time="2026-03-04T01:15:34.521256823Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 4 01:15:34.523065 containerd[1451]: time="2026-03-04T01:15:34.522991171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 01:15:34.525239 containerd[1451]: time="2026-03-04T01:15:34.525161622Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 391.655867ms" Mar 4 01:15:34.526004 containerd[1451]: time="2026-03-04T01:15:34.525967044Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 422.603763ms" Mar 4 01:15:34.529373 containerd[1451]: time="2026-03-04T01:15:34.529303723Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 401.242021ms" Mar 4 01:15:34.653660 containerd[1451]: time="2026-03-04T01:15:34.653571582Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:15:34.653833 containerd[1451]: time="2026-03-04T01:15:34.653674605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:15:34.653833 containerd[1451]: time="2026-03-04T01:15:34.653721953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:15:34.657546 containerd[1451]: time="2026-03-04T01:15:34.654932392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:15:34.657546 containerd[1451]: time="2026-03-04T01:15:34.654987575Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:15:34.657546 containerd[1451]: time="2026-03-04T01:15:34.654999418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:15:34.657546 containerd[1451]: time="2026-03-04T01:15:34.655076592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:15:34.657546 containerd[1451]: time="2026-03-04T01:15:34.655387241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:15:34.659518 containerd[1451]: time="2026-03-04T01:15:34.659257555Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:15:34.659518 containerd[1451]: time="2026-03-04T01:15:34.659295817Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:15:34.659518 containerd[1451]: time="2026-03-04T01:15:34.659313800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:15:34.659518 containerd[1451]: time="2026-03-04T01:15:34.659398258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:15:34.687944 systemd[1]: Started cri-containerd-8c5416020989ba925a997cec67bfbd08eb85fe5fc36cfe7eaf16527886957d7c.scope - libcontainer container 8c5416020989ba925a997cec67bfbd08eb85fe5fc36cfe7eaf16527886957d7c. Mar 4 01:15:34.693475 systemd[1]: Started cri-containerd-55150043943252713da2400a11f4e76fffaf7413278d4b262275c125e31d0cd9.scope - libcontainer container 55150043943252713da2400a11f4e76fffaf7413278d4b262275c125e31d0cd9. Mar 4 01:15:34.695578 systemd[1]: Started cri-containerd-dde7e84dede0b54c6109381643324ff8391d15643b260b7d480dcb74fd178f71.scope - libcontainer container dde7e84dede0b54c6109381643324ff8391d15643b260b7d480dcb74fd178f71. 
Mar 4 01:15:34.747791 containerd[1451]: time="2026-03-04T01:15:34.747654172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9d9d2d80db43d7ba140b0b747691ec45,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c5416020989ba925a997cec67bfbd08eb85fe5fc36cfe7eaf16527886957d7c\"" Mar 4 01:15:34.750357 kubelet[2129]: E0304 01:15:34.750292 2129 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:15:34.753554 containerd[1451]: time="2026-03-04T01:15:34.753482992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,} returns sandbox id \"55150043943252713da2400a11f4e76fffaf7413278d4b262275c125e31d0cd9\"" Mar 4 01:15:34.759484 kubelet[2129]: E0304 01:15:34.759426 2129 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:15:34.766328 containerd[1451]: time="2026-03-04T01:15:34.766305676Z" level=info msg="CreateContainer within sandbox \"8c5416020989ba925a997cec67bfbd08eb85fe5fc36cfe7eaf16527886957d7c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 4 01:15:34.767313 containerd[1451]: time="2026-03-04T01:15:34.766885777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,} returns sandbox id \"dde7e84dede0b54c6109381643324ff8391d15643b260b7d480dcb74fd178f71\"" Mar 4 01:15:34.768392 kubelet[2129]: E0304 01:15:34.767713 2129 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:15:34.769889 containerd[1451]: 
time="2026-03-04T01:15:34.769724407Z" level=info msg="CreateContainer within sandbox \"55150043943252713da2400a11f4e76fffaf7413278d4b262275c125e31d0cd9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 4 01:15:34.774625 containerd[1451]: time="2026-03-04T01:15:34.774571023Z" level=info msg="CreateContainer within sandbox \"dde7e84dede0b54c6109381643324ff8391d15643b260b7d480dcb74fd178f71\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 4 01:15:34.788148 containerd[1451]: time="2026-03-04T01:15:34.787968460Z" level=info msg="CreateContainer within sandbox \"8c5416020989ba925a997cec67bfbd08eb85fe5fc36cfe7eaf16527886957d7c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4f38b7c7fba9aa7c8eaa40cde7b452e2ff74e193cb73ecc80a0caa971286a512\"" Mar 4 01:15:34.788816 containerd[1451]: time="2026-03-04T01:15:34.788675294Z" level=info msg="StartContainer for \"4f38b7c7fba9aa7c8eaa40cde7b452e2ff74e193cb73ecc80a0caa971286a512\"" Mar 4 01:15:34.801157 containerd[1451]: time="2026-03-04T01:15:34.801083265Z" level=info msg="CreateContainer within sandbox \"55150043943252713da2400a11f4e76fffaf7413278d4b262275c125e31d0cd9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"829abed7c1da88a61a8c0c1c96f4cb110121150f35209072f2de9471c3d54fa1\"" Mar 4 01:15:34.802352 containerd[1451]: time="2026-03-04T01:15:34.802324225Z" level=info msg="StartContainer for \"829abed7c1da88a61a8c0c1c96f4cb110121150f35209072f2de9471c3d54fa1\"" Mar 4 01:15:34.808080 containerd[1451]: time="2026-03-04T01:15:34.808012979Z" level=info msg="CreateContainer within sandbox \"dde7e84dede0b54c6109381643324ff8391d15643b260b7d480dcb74fd178f71\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9b67d777fdd47e3293a27ad86d33d34aa911eea7ba430df9343b2f6562b9ba6f\"" Mar 4 01:15:34.809516 containerd[1451]: time="2026-03-04T01:15:34.808484304Z" level=info msg="StartContainer for 
\"9b67d777fdd47e3293a27ad86d33d34aa911eea7ba430df9343b2f6562b9ba6f\"" Mar 4 01:15:34.832625 systemd[1]: Started cri-containerd-4f38b7c7fba9aa7c8eaa40cde7b452e2ff74e193cb73ecc80a0caa971286a512.scope - libcontainer container 4f38b7c7fba9aa7c8eaa40cde7b452e2ff74e193cb73ecc80a0caa971286a512. Mar 4 01:15:34.848398 systemd[1]: Started cri-containerd-829abed7c1da88a61a8c0c1c96f4cb110121150f35209072f2de9471c3d54fa1.scope - libcontainer container 829abed7c1da88a61a8c0c1c96f4cb110121150f35209072f2de9471c3d54fa1. Mar 4 01:15:34.866952 systemd[1]: Started cri-containerd-9b67d777fdd47e3293a27ad86d33d34aa911eea7ba430df9343b2f6562b9ba6f.scope - libcontainer container 9b67d777fdd47e3293a27ad86d33d34aa911eea7ba430df9343b2f6562b9ba6f. Mar 4 01:15:34.905312 containerd[1451]: time="2026-03-04T01:15:34.905083399Z" level=info msg="StartContainer for \"4f38b7c7fba9aa7c8eaa40cde7b452e2ff74e193cb73ecc80a0caa971286a512\" returns successfully" Mar 4 01:15:34.925091 containerd[1451]: time="2026-03-04T01:15:34.924998456Z" level=info msg="StartContainer for \"829abed7c1da88a61a8c0c1c96f4cb110121150f35209072f2de9471c3d54fa1\" returns successfully" Mar 4 01:15:34.925324 containerd[1451]: time="2026-03-04T01:15:34.925252535Z" level=info msg="StartContainer for \"9b67d777fdd47e3293a27ad86d33d34aa911eea7ba430df9343b2f6562b9ba6f\" returns successfully" Mar 4 01:15:35.214899 kubelet[2129]: I0304 01:15:35.214816 2129 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 4 01:15:35.681435 kubelet[2129]: E0304 01:15:35.681163 2129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:15:35.681435 kubelet[2129]: E0304 01:15:35.681328 2129 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:15:35.685920 kubelet[2129]: E0304 01:15:35.685874 2129 
kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:15:35.686056 kubelet[2129]: E0304 01:15:35.686003 2129 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:15:35.700147 kubelet[2129]: E0304 01:15:35.699938 2129 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 4 01:15:35.700396 kubelet[2129]: E0304 01:15:35.700178 2129 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:15:35.954101 kubelet[2129]: E0304 01:15:35.953563 2129 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 4 01:15:36.036824 kubelet[2129]: I0304 01:15:36.036659 2129 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Mar 4 01:15:36.036824 kubelet[2129]: E0304 01:15:36.036715 2129 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 4 01:15:36.048980 kubelet[2129]: E0304 01:15:36.048923 2129 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 4 01:15:36.149807 kubelet[2129]: E0304 01:15:36.149615 2129 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 4 01:15:36.250284 kubelet[2129]: E0304 01:15:36.249962 2129 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 4 01:15:36.351312 kubelet[2129]: E0304 01:15:36.351163 2129 kubelet_node_status.go:392] "Error 
getting the current node from lister" err="node \"localhost\" not found" Mar 4 01:15:36.452252 kubelet[2129]: E0304 01:15:36.452014 2129 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 4 01:15:36.553439 kubelet[2129]: E0304 01:15:36.553195 2129 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 4 01:15:36.654517 kubelet[2129]: E0304 01:15:36.654402 2129 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 4 01:15:36.697680 kubelet[2129]: I0304 01:15:36.697591 2129 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 4 01:15:36.698246 kubelet[2129]: I0304 01:15:36.698237 2129 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 4 01:15:36.705102 kubelet[2129]: E0304 01:15:36.705045 2129 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 4 01:15:36.705363 kubelet[2129]: E0304 01:15:36.705311 2129 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:15:36.705993 kubelet[2129]: E0304 01:15:36.705813 2129 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 4 01:15:36.705993 kubelet[2129]: E0304 01:15:36.705935 2129 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:15:36.725030 kubelet[2129]: I0304 01:15:36.724956 2129 
kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 4 01:15:36.741106 kubelet[2129]: E0304 01:15:36.741044 2129 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 4 01:15:36.741247 kubelet[2129]: I0304 01:15:36.741158 2129 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 4 01:15:36.743566 kubelet[2129]: E0304 01:15:36.743543 2129 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 4 01:15:36.744310 kubelet[2129]: I0304 01:15:36.743689 2129 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 4 01:15:36.745591 kubelet[2129]: E0304 01:15:36.745548 2129 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 4 01:15:37.607264 kubelet[2129]: I0304 01:15:37.607110 2129 apiserver.go:52] "Watching apiserver" Mar 4 01:15:37.625285 kubelet[2129]: I0304 01:15:37.625179 2129 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 4 01:15:38.172821 systemd[1]: Reloading requested from client PID 2419 ('systemctl') (unit session-7.scope)... Mar 4 01:15:38.172856 systemd[1]: Reloading... Mar 4 01:15:38.267404 zram_generator::config[2458]: No configuration found. Mar 4 01:15:38.415554 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Mar 4 01:15:38.516323 systemd[1]: Reloading finished in 342 ms.
Mar 4 01:15:38.583251 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 01:15:38.599902 systemd[1]: kubelet.service: Deactivated successfully.
Mar 4 01:15:38.600287 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 01:15:38.600357 systemd[1]: kubelet.service: Consumed 1.050s CPU time, 130.2M memory peak, 0B memory swap peak.
Mar 4 01:15:38.610312 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 01:15:38.783505 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 01:15:38.803450 (kubelet)[2503]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 4 01:15:38.869599 kubelet[2503]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 4 01:15:38.880412 kubelet[2503]: I0304 01:15:38.879979 2503 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Mar 4 01:15:38.880412 kubelet[2503]: I0304 01:15:38.880011 2503 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 4 01:15:38.880412 kubelet[2503]: I0304 01:15:38.880028 2503 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 4 01:15:38.880412 kubelet[2503]: I0304 01:15:38.880033 2503 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 4 01:15:38.880412 kubelet[2503]: I0304 01:15:38.880275 2503 server.go:951] "Client rotation is on, will bootstrap in background"
Mar 4 01:15:38.881688 kubelet[2503]: I0304 01:15:38.881666 2503 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 4 01:15:38.884497 kubelet[2503]: I0304 01:15:38.884479 2503 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 4 01:15:38.888849 kubelet[2503]: E0304 01:15:38.888816 2503 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 4 01:15:38.888988 kubelet[2503]: I0304 01:15:38.888972 2503 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 4 01:15:38.900211 kubelet[2503]: I0304 01:15:38.900114 2503 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 4 01:15:38.900731 kubelet[2503]: I0304 01:15:38.900641 2503 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 4 01:15:38.901024 kubelet[2503]: I0304 01:15:38.900703 2503 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 4 01:15:38.901024 kubelet[2503]: I0304 01:15:38.901007 2503 topology_manager.go:143] "Creating topology manager with none policy"
Mar 4 01:15:38.901024 kubelet[2503]: I0304 01:15:38.901021 2503 container_manager_linux.go:308] "Creating device plugin manager"
Mar 4 01:15:38.901298 kubelet[2503]: I0304 01:15:38.901049 2503 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 4 01:15:38.901459 kubelet[2503]: I0304 01:15:38.901384 2503 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Mar 4 01:15:38.901889 kubelet[2503]: I0304 01:15:38.901737 2503 kubelet.go:482] "Attempting to sync node with API server"
Mar 4 01:15:38.901889 kubelet[2503]: I0304 01:15:38.901868 2503 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 4 01:15:38.901978 kubelet[2503]: I0304 01:15:38.901924 2503 kubelet.go:394] "Adding apiserver pod source"
Mar 4 01:15:38.901978 kubelet[2503]: I0304 01:15:38.901939 2503 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 4 01:15:38.910803 kubelet[2503]: I0304 01:15:38.908358 2503 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 4 01:15:38.912190 kubelet[2503]: I0304 01:15:38.912088 2503 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 4 01:15:38.912190 kubelet[2503]: I0304 01:15:38.912184 2503 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 4 01:15:38.925266 kubelet[2503]: I0304 01:15:38.925246 2503 server.go:1257] "Started kubelet"
Mar 4 01:15:38.928514 kubelet[2503]: I0304 01:15:38.927855 2503 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Mar 4 01:15:38.930637 kubelet[2503]: I0304 01:15:38.930553 2503 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 4 01:15:38.930637 kubelet[2503]: I0304 01:15:38.930643 2503 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 4 01:15:38.930925 kubelet[2503]: I0304 01:15:38.930592 2503 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Mar 4 01:15:38.931499 kubelet[2503]: I0304 01:15:38.931411 2503 server.go:317] "Adding debug handlers to kubelet server"
Mar 4 01:15:38.932188 kubelet[2503]: I0304 01:15:38.932107 2503 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 4 01:15:38.932248 kubelet[2503]: I0304 01:15:38.932223 2503 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 4 01:15:38.938311 kubelet[2503]: I0304 01:15:38.938241 2503 volume_manager.go:311] "Starting Kubelet Volume Manager"
Mar 4 01:15:38.939176 kubelet[2503]: I0304 01:15:38.939063 2503 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 4 01:15:38.940532 kubelet[2503]: I0304 01:15:38.939358 2503 reconciler.go:29] "Reconciler: start to sync state"
Mar 4 01:15:38.943042 kubelet[2503]: E0304 01:15:38.942905 2503 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 4 01:15:38.943589 kubelet[2503]: I0304 01:15:38.943251 2503 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 4 01:15:38.946017 kubelet[2503]: I0304 01:15:38.945975 2503 factory.go:223] Registration of the containerd container factory successfully
Mar 4 01:15:38.946017 kubelet[2503]: I0304 01:15:38.946008 2503 factory.go:223] Registration of the systemd container factory successfully
Mar 4 01:15:38.957277 kubelet[2503]: I0304 01:15:38.957210 2503 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 4 01:15:38.959080 kubelet[2503]: I0304 01:15:38.959006 2503 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 4 01:15:38.959080 kubelet[2503]: I0304 01:15:38.959053 2503 status_manager.go:249] "Starting to sync pod status with apiserver"
Mar 4 01:15:38.959241 kubelet[2503]: I0304 01:15:38.959087 2503 kubelet.go:2501] "Starting kubelet main sync loop"
Mar 4 01:15:38.959241 kubelet[2503]: E0304 01:15:38.959202 2503 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 4 01:15:39.009303 kubelet[2503]: I0304 01:15:39.009212 2503 cpu_manager.go:225] "Starting" policy="none"
Mar 4 01:15:39.009303 kubelet[2503]: I0304 01:15:39.009261 2503 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 4 01:15:39.009303 kubelet[2503]: I0304 01:15:39.009284 2503 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Mar 4 01:15:39.009602 kubelet[2503]: I0304 01:15:39.009503 2503 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
Mar 4 01:15:39.009602 kubelet[2503]: I0304 01:15:39.009556 2503 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
Mar 4 01:15:39.009602 kubelet[2503]: I0304 01:15:39.009581 2503 policy_none.go:50] "Start"
Mar 4 01:15:39.009602 kubelet[2503]: I0304 01:15:39.009592 2503 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 4 01:15:39.009697 kubelet[2503]: I0304 01:15:39.009615 2503 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 4 01:15:39.009928 kubelet[2503]: I0304 01:15:39.009866 2503 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Mar 4 01:15:39.009928 kubelet[2503]: I0304 01:15:39.009886 2503 policy_none.go:44] "Start"
Mar 4 01:15:39.020525 kubelet[2503]: E0304 01:15:39.020459 2503 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 4 01:15:39.020991 kubelet[2503]: I0304 01:15:39.020644 2503 eviction_manager.go:194] "Eviction manager: starting control loop"
Mar 4 01:15:39.020991 kubelet[2503]: I0304 01:15:39.020657 2503 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 4 01:15:39.021228 kubelet[2503]: I0304 01:15:39.020991 2503 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Mar 4 01:15:39.024017 kubelet[2503]: E0304 01:15:39.024002 2503 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 4 01:15:39.061610 kubelet[2503]: I0304 01:15:39.060888 2503 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 4 01:15:39.061610 kubelet[2503]: I0304 01:15:39.060957 2503 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 4 01:15:39.061610 kubelet[2503]: I0304 01:15:39.061277 2503 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 4 01:15:39.129829 kubelet[2503]: I0304 01:15:39.129676 2503 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 4 01:15:39.140298 kubelet[2503]: I0304 01:15:39.140120 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9d9d2d80db43d7ba140b0b747691ec45-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9d9d2d80db43d7ba140b0b747691ec45\") " pod="kube-system/kube-apiserver-localhost"
Mar 4 01:15:39.142056 kubelet[2503]: I0304 01:15:39.141877 2503 kubelet_node_status.go:123] "Node was previously registered" node="localhost"
Mar 4 01:15:39.142056 kubelet[2503]: I0304 01:15:39.141967 2503 kubelet_node_status.go:77] "Successfully registered node" node="localhost"
Mar 4 01:15:39.177868 sudo[2545]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Mar 4 01:15:39.178309 sudo[2545]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Mar 4 01:15:39.241234 kubelet[2503]: I0304 01:15:39.241079 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 4 01:15:39.241234 kubelet[2503]: I0304 01:15:39.241213 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 4 01:15:39.241390 kubelet[2503]: I0304 01:15:39.241247 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 4 01:15:39.241390 kubelet[2503]: I0304 01:15:39.241302 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9d9d2d80db43d7ba140b0b747691ec45-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9d9d2d80db43d7ba140b0b747691ec45\") " pod="kube-system/kube-apiserver-localhost"
Mar 4 01:15:39.241390 kubelet[2503]: I0304 01:15:39.241329 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 4 01:15:39.241390 kubelet[2503]: I0304 01:15:39.241352 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost"
Mar 4 01:15:39.241390 kubelet[2503]: I0304 01:15:39.241373 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9d9d2d80db43d7ba140b0b747691ec45-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9d9d2d80db43d7ba140b0b747691ec45\") " pod="kube-system/kube-apiserver-localhost"
Mar 4 01:15:39.241499 kubelet[2503]: I0304 01:15:39.241395 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 4 01:15:39.368383 kubelet[2503]: E0304 01:15:39.368086 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:15:39.371280 kubelet[2503]: E0304 01:15:39.371220 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:15:39.371384 kubelet[2503]: E0304 01:15:39.371354 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:15:39.715183 sudo[2545]: pam_unix(sudo:session): session closed for user root
Mar 4 01:15:39.906844 kubelet[2503]: I0304 01:15:39.904907 2503 apiserver.go:52] "Watching apiserver"
Mar 4 01:15:39.939340 kubelet[2503]: I0304 01:15:39.939255 2503 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 4 01:15:39.977980 kubelet[2503]: E0304 01:15:39.977713 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:15:39.978402 kubelet[2503]: I0304 01:15:39.978217 2503 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 4 01:15:39.978402 kubelet[2503]: I0304 01:15:39.978353 2503 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 4 01:15:39.990624 kubelet[2503]: E0304 01:15:39.990505 2503 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Mar 4 01:15:39.990956 kubelet[2503]: E0304 01:15:39.990704 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:15:39.992568 kubelet[2503]: E0304 01:15:39.992408 2503 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Mar 4 01:15:39.992637 kubelet[2503]: E0304 01:15:39.992578 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:15:40.979931 kubelet[2503]: E0304 01:15:40.979835 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:15:40.980510 kubelet[2503]: E0304 01:15:40.979949 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:15:41.089065 sudo[1630]: pam_unix(sudo:session): session closed for user root
Mar 4 01:15:41.091210 sshd[1627]: pam_unix(sshd:session): session closed for user core
Mar 4 01:15:41.096494 systemd[1]: sshd@6-10.0.0.125:22-10.0.0.1:57444.service: Deactivated successfully.
Mar 4 01:15:41.098546 systemd[1]: session-7.scope: Deactivated successfully.
Mar 4 01:15:41.098858 systemd[1]: session-7.scope: Consumed 3.971s CPU time, 160.1M memory peak, 0B memory swap peak.
Mar 4 01:15:41.099860 systemd-logind[1443]: Session 7 logged out. Waiting for processes to exit.
Mar 4 01:15:41.101537 systemd-logind[1443]: Removed session 7.
Mar 4 01:15:43.200232 kubelet[2503]: I0304 01:15:43.200073 2503 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 4 01:15:43.200855 containerd[1451]: time="2026-03-04T01:15:43.200735771Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 4 01:15:43.201285 kubelet[2503]: I0304 01:15:43.200959 2503 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 4 01:15:43.988363 systemd[1]: Created slice kubepods-besteffort-pod5ef7bfb4_82b7_4d38_8fde_f0e866697073.slice - libcontainer container kubepods-besteffort-pod5ef7bfb4_82b7_4d38_8fde_f0e866697073.slice.
Mar 4 01:15:44.002037 systemd[1]: Created slice kubepods-burstable-pod580c0bac_6b83_42cc_830e_0548fc4ffa5e.slice - libcontainer container kubepods-burstable-pod580c0bac_6b83_42cc_830e_0548fc4ffa5e.slice.
Mar 4 01:15:44.016814 kubelet[2503]: I0304 01:15:44.016589 2503 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=5.016571922 podStartE2EDuration="5.016571922s" podCreationTimestamp="2026-03-04 01:15:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:15:44.005250674 +0000 UTC m=+5.193941509" watchObservedRunningTime="2026-03-04 01:15:44.016571922 +0000 UTC m=+5.205262767"
Mar 4 01:15:44.027477 kubelet[2503]: I0304 01:15:44.027327 2503 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.027313568 podStartE2EDuration="5.027313568s" podCreationTimestamp="2026-03-04 01:15:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:15:44.016981781 +0000 UTC m=+5.205672616" watchObservedRunningTime="2026-03-04 01:15:44.027313568 +0000 UTC m=+5.216004403"
Mar 4 01:15:44.027711 kubelet[2503]: I0304 01:15:44.027594 2503 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=5.027590069 podStartE2EDuration="5.027590069s" podCreationTimestamp="2026-03-04 01:15:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:15:44.027313698 +0000 UTC m=+5.216004533" watchObservedRunningTime="2026-03-04 01:15:44.027590069 +0000 UTC m=+5.216280914"
Mar 4 01:15:44.082719 kubelet[2503]: I0304 01:15:44.082495 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/580c0bac-6b83-42cc-830e-0548fc4ffa5e-cilium-config-path\") pod \"cilium-mx5kk\" (UID: \"580c0bac-6b83-42cc-830e-0548fc4ffa5e\") " pod="kube-system/cilium-mx5kk"
Mar 4 01:15:44.082719 kubelet[2503]: I0304 01:15:44.082588 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/580c0bac-6b83-42cc-830e-0548fc4ffa5e-hubble-tls\") pod \"cilium-mx5kk\" (UID: \"580c0bac-6b83-42cc-830e-0548fc4ffa5e\") " pod="kube-system/cilium-mx5kk"
Mar 4 01:15:44.082719 kubelet[2503]: I0304 01:15:44.082616 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47jdk\" (UniqueName: \"kubernetes.io/projected/580c0bac-6b83-42cc-830e-0548fc4ffa5e-kube-api-access-47jdk\") pod \"cilium-mx5kk\" (UID: \"580c0bac-6b83-42cc-830e-0548fc4ffa5e\") " pod="kube-system/cilium-mx5kk"
Mar 4 01:15:44.082719 kubelet[2503]: I0304 01:15:44.082642 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-hostproc\") pod \"cilium-mx5kk\" (UID: \"580c0bac-6b83-42cc-830e-0548fc4ffa5e\") " pod="kube-system/cilium-mx5kk"
Mar 4 01:15:44.082719 kubelet[2503]: I0304 01:15:44.082665 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-etc-cni-netd\") pod \"cilium-mx5kk\" (UID: \"580c0bac-6b83-42cc-830e-0548fc4ffa5e\") " pod="kube-system/cilium-mx5kk"
Mar 4 01:15:44.082719 kubelet[2503]: I0304 01:15:44.082689 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/580c0bac-6b83-42cc-830e-0548fc4ffa5e-clustermesh-secrets\") pod \"cilium-mx5kk\" (UID: \"580c0bac-6b83-42cc-830e-0548fc4ffa5e\") " pod="kube-system/cilium-mx5kk"
Mar 4 01:15:44.083093 kubelet[2503]: I0304 01:15:44.082712 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-host-proc-sys-net\") pod \"cilium-mx5kk\" (UID: \"580c0bac-6b83-42cc-830e-0548fc4ffa5e\") " pod="kube-system/cilium-mx5kk"
Mar 4 01:15:44.083093 kubelet[2503]: I0304 01:15:44.082989 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ef7bfb4-82b7-4d38-8fde-f0e866697073-xtables-lock\") pod \"kube-proxy-rg4qb\" (UID: \"5ef7bfb4-82b7-4d38-8fde-f0e866697073\") " pod="kube-system/kube-proxy-rg4qb"
Mar 4 01:15:44.083093 kubelet[2503]: I0304 01:15:44.083028 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-cilium-run\") pod \"cilium-mx5kk\" (UID: \"580c0bac-6b83-42cc-830e-0548fc4ffa5e\") " pod="kube-system/cilium-mx5kk"
Mar 4 01:15:44.083093 kubelet[2503]: I0304 01:15:44.083052 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-bpf-maps\") pod \"cilium-mx5kk\" (UID: \"580c0bac-6b83-42cc-830e-0548fc4ffa5e\") " pod="kube-system/cilium-mx5kk"
Mar 4 01:15:44.083093 kubelet[2503]: I0304 01:15:44.083072 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-cni-path\") pod \"cilium-mx5kk\" (UID: \"580c0bac-6b83-42cc-830e-0548fc4ffa5e\") " pod="kube-system/cilium-mx5kk"
Mar 4 01:15:44.083343 kubelet[2503]: I0304 01:15:44.083275 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-host-proc-sys-kernel\") pod \"cilium-mx5kk\" (UID: \"580c0bac-6b83-42cc-830e-0548fc4ffa5e\") " pod="kube-system/cilium-mx5kk"
Mar 4 01:15:44.083368 kubelet[2503]: I0304 01:15:44.083335 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5ef7bfb4-82b7-4d38-8fde-f0e866697073-kube-proxy\") pod \"kube-proxy-rg4qb\" (UID: \"5ef7bfb4-82b7-4d38-8fde-f0e866697073\") " pod="kube-system/kube-proxy-rg4qb"
Mar 4 01:15:44.083389 kubelet[2503]: I0304 01:15:44.083367 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrhgd\" (UniqueName: \"kubernetes.io/projected/5ef7bfb4-82b7-4d38-8fde-f0e866697073-kube-api-access-nrhgd\") pod \"kube-proxy-rg4qb\" (UID: \"5ef7bfb4-82b7-4d38-8fde-f0e866697073\") " pod="kube-system/kube-proxy-rg4qb"
Mar 4 01:15:44.083414 kubelet[2503]: I0304 01:15:44.083391 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-lib-modules\") pod \"cilium-mx5kk\" (UID: \"580c0bac-6b83-42cc-830e-0548fc4ffa5e\") " pod="kube-system/cilium-mx5kk"
Mar 4 01:15:44.083437 kubelet[2503]: I0304 01:15:44.083424 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-xtables-lock\") pod \"cilium-mx5kk\" (UID: \"580c0bac-6b83-42cc-830e-0548fc4ffa5e\") " pod="kube-system/cilium-mx5kk"
Mar 4 01:15:44.083478 kubelet[2503]: I0304 01:15:44.083449 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ef7bfb4-82b7-4d38-8fde-f0e866697073-lib-modules\") pod \"kube-proxy-rg4qb\" (UID: \"5ef7bfb4-82b7-4d38-8fde-f0e866697073\") " pod="kube-system/kube-proxy-rg4qb"
Mar 4 01:15:44.083523 kubelet[2503]: I0304 01:15:44.083476 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-cilium-cgroup\") pod \"cilium-mx5kk\" (UID: \"580c0bac-6b83-42cc-830e-0548fc4ffa5e\") " pod="kube-system/cilium-mx5kk"
Mar 4 01:15:44.306293 kubelet[2503]: E0304 01:15:44.305576 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:15:44.310349 containerd[1451]: time="2026-03-04T01:15:44.309924448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rg4qb,Uid:5ef7bfb4-82b7-4d38-8fde-f0e866697073,Namespace:kube-system,Attempt:0,}"
Mar 4 01:15:44.314659 kubelet[2503]: E0304 01:15:44.314332 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 4 01:15:44.315434 containerd[1451]: time="2026-03-04T01:15:44.315292619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mx5kk,Uid:580c0bac-6b83-42cc-830e-0548fc4ffa5e,Namespace:kube-system,Attempt:0,}"
Mar 4 01:15:44.361254 containerd[1451]: time="2026-03-04T01:15:44.361053311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 4 01:15:44.361254 containerd[1451]: time="2026-03-04T01:15:44.361197638Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 4 01:15:44.361254 containerd[1451]: time="2026-03-04T01:15:44.361235468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 01:15:44.361598 containerd[1451]: time="2026-03-04T01:15:44.361404882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 01:15:44.371665 containerd[1451]: time="2026-03-04T01:15:44.366367711Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 4 01:15:44.371665 containerd[1451]: time="2026-03-04T01:15:44.366611131Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 4 01:15:44.371665 containerd[1451]: time="2026-03-04T01:15:44.366630487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 01:15:44.371665 containerd[1451]: time="2026-03-04T01:15:44.367870161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 01:15:44.397257 systemd[1]: Started cri-containerd-238626d49e44056271dbfa38af70d36372d6344dfa0aea580c24f584f20043de.scope - libcontainer container 238626d49e44056271dbfa38af70d36372d6344dfa0aea580c24f584f20043de.
Mar 4 01:15:44.427017 systemd[1]: Started cri-containerd-e69f214e53e773a1d6c8e7835b031dc912fec5d85336fe6c05c0c6f5d6f68fb5.scope - libcontainer container e69f214e53e773a1d6c8e7835b031dc912fec5d85336fe6c05c0c6f5d6f68fb5.
Mar 4 01:15:44.491540 systemd[1]: Created slice kubepods-besteffort-pod1704a8ea_1822_4a66_9f90_ea5d365e8ad1.slice - libcontainer container kubepods-besteffort-pod1704a8ea_1822_4a66_9f90_ea5d365e8ad1.slice.
Mar 4 01:15:44.493925 kubelet[2503]: I0304 01:15:44.493559 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1704a8ea-1822-4a66-9f90-ea5d365e8ad1-cilium-config-path\") pod \"cilium-operator-78cf5644cb-wtk6n\" (UID: \"1704a8ea-1822-4a66-9f90-ea5d365e8ad1\") " pod="kube-system/cilium-operator-78cf5644cb-wtk6n" Mar 4 01:15:44.493925 kubelet[2503]: I0304 01:15:44.493625 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8bxv\" (UniqueName: \"kubernetes.io/projected/1704a8ea-1822-4a66-9f90-ea5d365e8ad1-kube-api-access-g8bxv\") pod \"cilium-operator-78cf5644cb-wtk6n\" (UID: \"1704a8ea-1822-4a66-9f90-ea5d365e8ad1\") " pod="kube-system/cilium-operator-78cf5644cb-wtk6n" Mar 4 01:15:44.510883 containerd[1451]: time="2026-03-04T01:15:44.510550367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rg4qb,Uid:5ef7bfb4-82b7-4d38-8fde-f0e866697073,Namespace:kube-system,Attempt:0,} returns sandbox id \"238626d49e44056271dbfa38af70d36372d6344dfa0aea580c24f584f20043de\"" Mar 4 01:15:44.516268 kubelet[2503]: E0304 01:15:44.516213 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:15:44.521301 containerd[1451]: time="2026-03-04T01:15:44.521121610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mx5kk,Uid:580c0bac-6b83-42cc-830e-0548fc4ffa5e,Namespace:kube-system,Attempt:0,} returns sandbox id \"e69f214e53e773a1d6c8e7835b031dc912fec5d85336fe6c05c0c6f5d6f68fb5\"" Mar 4 01:15:44.523248 kubelet[2503]: E0304 01:15:44.523211 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:15:44.528788 containerd[1451]: 
time="2026-03-04T01:15:44.528455128Z" level=info msg="CreateContainer within sandbox \"238626d49e44056271dbfa38af70d36372d6344dfa0aea580c24f584f20043de\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 4 01:15:44.529312 containerd[1451]: time="2026-03-04T01:15:44.529083180Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 4 01:15:44.575324 containerd[1451]: time="2026-03-04T01:15:44.574485167Z" level=info msg="CreateContainer within sandbox \"238626d49e44056271dbfa38af70d36372d6344dfa0aea580c24f584f20043de\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1f86b2f66ae890aa63413837b3c24b2e205f220ab4ee5598d0fdcec0be2565c2\"" Mar 4 01:15:44.575911 containerd[1451]: time="2026-03-04T01:15:44.575717477Z" level=info msg="StartContainer for \"1f86b2f66ae890aa63413837b3c24b2e205f220ab4ee5598d0fdcec0be2565c2\"" Mar 4 01:15:44.639993 systemd[1]: Started cri-containerd-1f86b2f66ae890aa63413837b3c24b2e205f220ab4ee5598d0fdcec0be2565c2.scope - libcontainer container 1f86b2f66ae890aa63413837b3c24b2e205f220ab4ee5598d0fdcec0be2565c2. Mar 4 01:15:44.944219 kubelet[2503]: E0304 01:15:44.926574 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:15:45.027647 containerd[1451]: time="2026-03-04T01:15:45.022179542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-wtk6n,Uid:1704a8ea-1822-4a66-9f90-ea5d365e8ad1,Namespace:kube-system,Attempt:0,}" Mar 4 01:15:45.879426 containerd[1451]: time="2026-03-04T01:15:45.877307424Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:15:45.879426 containerd[1451]: time="2026-03-04T01:15:45.877682447Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:15:45.879426 containerd[1451]: time="2026-03-04T01:15:45.877698918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:15:45.879426 containerd[1451]: time="2026-03-04T01:15:45.879428850Z" level=info msg="StartContainer for \"1f86b2f66ae890aa63413837b3c24b2e205f220ab4ee5598d0fdcec0be2565c2\" returns successfully" Mar 4 01:15:45.880437 containerd[1451]: time="2026-03-04T01:15:45.880386994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:15:45.940071 systemd[1]: Started cri-containerd-1aa9d2a6057f7cccd7847164da2362faf1fd31a3ef7d8702952871778790b1bb.scope - libcontainer container 1aa9d2a6057f7cccd7847164da2362faf1fd31a3ef7d8702952871778790b1bb. 
Mar 4 01:15:46.023717 containerd[1451]: time="2026-03-04T01:15:46.021698107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-wtk6n,Uid:1704a8ea-1822-4a66-9f90-ea5d365e8ad1,Namespace:kube-system,Attempt:0,} returns sandbox id \"1aa9d2a6057f7cccd7847164da2362faf1fd31a3ef7d8702952871778790b1bb\"" Mar 4 01:15:46.027349 kubelet[2503]: E0304 01:15:46.027208 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:15:46.190892 kubelet[2503]: E0304 01:15:46.189428 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:15:47.227166 kubelet[2503]: E0304 01:15:47.226984 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:15:47.391834 kubelet[2503]: E0304 01:15:47.389977 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:15:47.430495 kubelet[2503]: I0304 01:15:47.429729 2503 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-rg4qb" podStartSLOduration=4.429714698 podStartE2EDuration="4.429714698s" podCreationTimestamp="2026-03-04 01:15:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:15:46.237033529 +0000 UTC m=+7.425724364" watchObservedRunningTime="2026-03-04 01:15:47.429714698 +0000 UTC m=+8.618405533" Mar 4 01:15:49.225163 kubelet[2503]: E0304 01:15:49.224960 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:15:49.729857 kubelet[2503]: E0304 01:15:49.729639 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:15:50.253058 kubelet[2503]: E0304 01:15:50.253007 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:15:54.722019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1616469378.mount: Deactivated successfully. Mar 4 01:15:55.754964 update_engine[1446]: I20260304 01:15:55.754686 1446 update_attempter.cc:509] Updating boot flags... Mar 4 01:15:55.812072 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2915) Mar 4 01:15:55.879910 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2913) Mar 4 01:15:55.929899 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2913) Mar 4 01:15:57.158054 containerd[1451]: time="2026-03-04T01:15:57.157934884Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:15:57.159040 containerd[1451]: time="2026-03-04T01:15:57.158908318Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 4 01:15:57.160613 containerd[1451]: time="2026-03-04T01:15:57.160498672Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:15:57.162190 containerd[1451]: 
time="2026-03-04T01:15:57.162106893Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.632994208s" Mar 4 01:15:57.162190 containerd[1451]: time="2026-03-04T01:15:57.162186200Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 4 01:15:57.163724 containerd[1451]: time="2026-03-04T01:15:57.163629804Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 4 01:15:57.169964 containerd[1451]: time="2026-03-04T01:15:57.169928127Z" level=info msg="CreateContainer within sandbox \"e69f214e53e773a1d6c8e7835b031dc912fec5d85336fe6c05c0c6f5d6f68fb5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 4 01:15:57.189482 containerd[1451]: time="2026-03-04T01:15:57.189410818Z" level=info msg="CreateContainer within sandbox \"e69f214e53e773a1d6c8e7835b031dc912fec5d85336fe6c05c0c6f5d6f68fb5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a6021d4843f7ea6709dc6dc0a9e8ced18746e1a8a39b6170ebe6ebdc47ccbdc3\"" Mar 4 01:15:57.190321 containerd[1451]: time="2026-03-04T01:15:57.190229129Z" level=info msg="StartContainer for \"a6021d4843f7ea6709dc6dc0a9e8ced18746e1a8a39b6170ebe6ebdc47ccbdc3\"" Mar 4 01:15:57.250102 systemd[1]: Started cri-containerd-a6021d4843f7ea6709dc6dc0a9e8ced18746e1a8a39b6170ebe6ebdc47ccbdc3.scope - libcontainer container a6021d4843f7ea6709dc6dc0a9e8ced18746e1a8a39b6170ebe6ebdc47ccbdc3. 
Mar 4 01:15:57.333243 containerd[1451]: time="2026-03-04T01:15:57.333155737Z" level=info msg="StartContainer for \"a6021d4843f7ea6709dc6dc0a9e8ced18746e1a8a39b6170ebe6ebdc47ccbdc3\" returns successfully" Mar 4 01:15:57.343050 systemd[1]: cri-containerd-a6021d4843f7ea6709dc6dc0a9e8ced18746e1a8a39b6170ebe6ebdc47ccbdc3.scope: Deactivated successfully. Mar 4 01:15:57.403821 kubelet[2503]: E0304 01:15:57.402396 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:15:57.436474 containerd[1451]: time="2026-03-04T01:15:57.435665205Z" level=info msg="shim disconnected" id=a6021d4843f7ea6709dc6dc0a9e8ced18746e1a8a39b6170ebe6ebdc47ccbdc3 namespace=k8s.io Mar 4 01:15:57.436474 containerd[1451]: time="2026-03-04T01:15:57.435732551Z" level=warning msg="cleaning up after shim disconnected" id=a6021d4843f7ea6709dc6dc0a9e8ced18746e1a8a39b6170ebe6ebdc47ccbdc3 namespace=k8s.io Mar 4 01:15:57.436474 containerd[1451]: time="2026-03-04T01:15:57.435784486Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 4 01:15:58.184178 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a6021d4843f7ea6709dc6dc0a9e8ced18746e1a8a39b6170ebe6ebdc47ccbdc3-rootfs.mount: Deactivated successfully. 
Mar 4 01:15:58.285824 kubelet[2503]: E0304 01:15:58.285053 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:15:58.293584 containerd[1451]: time="2026-03-04T01:15:58.293416195Z" level=info msg="CreateContainer within sandbox \"e69f214e53e773a1d6c8e7835b031dc912fec5d85336fe6c05c0c6f5d6f68fb5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 4 01:15:58.312939 containerd[1451]: time="2026-03-04T01:15:58.312638364Z" level=info msg="CreateContainer within sandbox \"e69f214e53e773a1d6c8e7835b031dc912fec5d85336fe6c05c0c6f5d6f68fb5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5a6f31578f58bb5ec9ed2b042b2a881a79609d429cb7d163c3ee4ca1557c7897\"" Mar 4 01:15:58.316278 containerd[1451]: time="2026-03-04T01:15:58.316180661Z" level=info msg="StartContainer for \"5a6f31578f58bb5ec9ed2b042b2a881a79609d429cb7d163c3ee4ca1557c7897\"" Mar 4 01:15:58.358060 systemd[1]: Started cri-containerd-5a6f31578f58bb5ec9ed2b042b2a881a79609d429cb7d163c3ee4ca1557c7897.scope - libcontainer container 5a6f31578f58bb5ec9ed2b042b2a881a79609d429cb7d163c3ee4ca1557c7897. Mar 4 01:15:58.407648 containerd[1451]: time="2026-03-04T01:15:58.407431644Z" level=info msg="StartContainer for \"5a6f31578f58bb5ec9ed2b042b2a881a79609d429cb7d163c3ee4ca1557c7897\" returns successfully" Mar 4 01:15:58.427703 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 4 01:15:58.428436 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 4 01:15:58.428513 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 4 01:15:58.436363 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 4 01:15:58.436874 systemd[1]: cri-containerd-5a6f31578f58bb5ec9ed2b042b2a881a79609d429cb7d163c3ee4ca1557c7897.scope: Deactivated successfully. 
Mar 4 01:15:58.470466 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 4 01:15:58.524238 containerd[1451]: time="2026-03-04T01:15:58.524109366Z" level=info msg="shim disconnected" id=5a6f31578f58bb5ec9ed2b042b2a881a79609d429cb7d163c3ee4ca1557c7897 namespace=k8s.io Mar 4 01:15:58.524238 containerd[1451]: time="2026-03-04T01:15:58.524245199Z" level=warning msg="cleaning up after shim disconnected" id=5a6f31578f58bb5ec9ed2b042b2a881a79609d429cb7d163c3ee4ca1557c7897 namespace=k8s.io Mar 4 01:15:58.524238 containerd[1451]: time="2026-03-04T01:15:58.524261480Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 4 01:15:58.648335 containerd[1451]: time="2026-03-04T01:15:58.648223627Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:15:58.649492 containerd[1451]: time="2026-03-04T01:15:58.649420270Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 4 01:15:58.651193 containerd[1451]: time="2026-03-04T01:15:58.651081464Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 01:15:58.653007 containerd[1451]: time="2026-03-04T01:15:58.652952214Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.489248983s" Mar 4 01:15:58.653111 containerd[1451]: 
time="2026-03-04T01:15:58.653019508Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 4 01:15:58.659045 containerd[1451]: time="2026-03-04T01:15:58.659006214Z" level=info msg="CreateContainer within sandbox \"1aa9d2a6057f7cccd7847164da2362faf1fd31a3ef7d8702952871778790b1bb\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 4 01:15:58.673401 containerd[1451]: time="2026-03-04T01:15:58.673313534Z" level=info msg="CreateContainer within sandbox \"1aa9d2a6057f7cccd7847164da2362faf1fd31a3ef7d8702952871778790b1bb\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"5e95cd21adda8cd3e09f9b2d48fed15fb816ac6fff147efe7e0ab2d5a2d7a91a\"" Mar 4 01:15:58.674408 containerd[1451]: time="2026-03-04T01:15:58.674303521Z" level=info msg="StartContainer for \"5e95cd21adda8cd3e09f9b2d48fed15fb816ac6fff147efe7e0ab2d5a2d7a91a\"" Mar 4 01:15:58.719238 systemd[1]: Started cri-containerd-5e95cd21adda8cd3e09f9b2d48fed15fb816ac6fff147efe7e0ab2d5a2d7a91a.scope - libcontainer container 5e95cd21adda8cd3e09f9b2d48fed15fb816ac6fff147efe7e0ab2d5a2d7a91a. Mar 4 01:15:58.759863 containerd[1451]: time="2026-03-04T01:15:58.759668827Z" level=info msg="StartContainer for \"5e95cd21adda8cd3e09f9b2d48fed15fb816ac6fff147efe7e0ab2d5a2d7a91a\" returns successfully" Mar 4 01:15:59.184670 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a6f31578f58bb5ec9ed2b042b2a881a79609d429cb7d163c3ee4ca1557c7897-rootfs.mount: Deactivated successfully. 
Mar 4 01:15:59.227495 kubelet[2503]: E0304 01:15:59.227439 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:15:59.295180 kubelet[2503]: E0304 01:15:59.292542 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:15:59.302679 containerd[1451]: time="2026-03-04T01:15:59.302631035Z" level=info msg="CreateContainer within sandbox \"e69f214e53e773a1d6c8e7835b031dc912fec5d85336fe6c05c0c6f5d6f68fb5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 4 01:15:59.305809 kubelet[2503]: E0304 01:15:59.305523 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:15:59.350882 containerd[1451]: time="2026-03-04T01:15:59.348317434Z" level=info msg="CreateContainer within sandbox \"e69f214e53e773a1d6c8e7835b031dc912fec5d85336fe6c05c0c6f5d6f68fb5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8bd19e3ee0d183b330de8c1cfce66f46f52d7e8417021f551bb08e89f563b452\"" Mar 4 01:15:59.350882 containerd[1451]: time="2026-03-04T01:15:59.350579910Z" level=info msg="StartContainer for \"8bd19e3ee0d183b330de8c1cfce66f46f52d7e8417021f551bb08e89f563b452\"" Mar 4 01:15:59.445989 systemd[1]: Started cri-containerd-8bd19e3ee0d183b330de8c1cfce66f46f52d7e8417021f551bb08e89f563b452.scope - libcontainer container 8bd19e3ee0d183b330de8c1cfce66f46f52d7e8417021f551bb08e89f563b452. 
Mar 4 01:15:59.483815 containerd[1451]: time="2026-03-04T01:15:59.483615261Z" level=info msg="StartContainer for \"8bd19e3ee0d183b330de8c1cfce66f46f52d7e8417021f551bb08e89f563b452\" returns successfully" Mar 4 01:15:59.487921 systemd[1]: cri-containerd-8bd19e3ee0d183b330de8c1cfce66f46f52d7e8417021f551bb08e89f563b452.scope: Deactivated successfully. Mar 4 01:15:59.563086 containerd[1451]: time="2026-03-04T01:15:59.563007196Z" level=info msg="shim disconnected" id=8bd19e3ee0d183b330de8c1cfce66f46f52d7e8417021f551bb08e89f563b452 namespace=k8s.io Mar 4 01:15:59.563086 containerd[1451]: time="2026-03-04T01:15:59.563087395Z" level=warning msg="cleaning up after shim disconnected" id=8bd19e3ee0d183b330de8c1cfce66f46f52d7e8417021f551bb08e89f563b452 namespace=k8s.io Mar 4 01:15:59.563389 containerd[1451]: time="2026-03-04T01:15:59.563100169Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 4 01:16:00.184224 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8bd19e3ee0d183b330de8c1cfce66f46f52d7e8417021f551bb08e89f563b452-rootfs.mount: Deactivated successfully. 
Mar 4 01:16:00.307831 kubelet[2503]: E0304 01:16:00.307666 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:16:00.308409 kubelet[2503]: E0304 01:16:00.308113 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:16:00.314978 containerd[1451]: time="2026-03-04T01:16:00.314906431Z" level=info msg="CreateContainer within sandbox \"e69f214e53e773a1d6c8e7835b031dc912fec5d85336fe6c05c0c6f5d6f68fb5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 4 01:16:00.330857 kubelet[2503]: I0304 01:16:00.329239 2503 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-operator-78cf5644cb-wtk6n" podStartSLOduration=3.704474686 podStartE2EDuration="16.329225861s" podCreationTimestamp="2026-03-04 01:15:44 +0000 UTC" firstStartedPulling="2026-03-04 01:15:46.029462259 +0000 UTC m=+7.218153104" lastFinishedPulling="2026-03-04 01:15:58.654213434 +0000 UTC m=+19.842904279" observedRunningTime="2026-03-04 01:15:59.385373777 +0000 UTC m=+20.574064612" watchObservedRunningTime="2026-03-04 01:16:00.329225861 +0000 UTC m=+21.517916696" Mar 4 01:16:00.339585 containerd[1451]: time="2026-03-04T01:16:00.339474684Z" level=info msg="CreateContainer within sandbox \"e69f214e53e773a1d6c8e7835b031dc912fec5d85336fe6c05c0c6f5d6f68fb5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6fcc25a695474c05f7e6b7fb5e1ca5ec4803c3bcbe2591745ff8898d63eac176\"" Mar 4 01:16:00.339715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3633146301.mount: Deactivated successfully. 
Mar 4 01:16:00.340479 containerd[1451]: time="2026-03-04T01:16:00.340272912Z" level=info msg="StartContainer for \"6fcc25a695474c05f7e6b7fb5e1ca5ec4803c3bcbe2591745ff8898d63eac176\"" Mar 4 01:16:00.378947 systemd[1]: Started cri-containerd-6fcc25a695474c05f7e6b7fb5e1ca5ec4803c3bcbe2591745ff8898d63eac176.scope - libcontainer container 6fcc25a695474c05f7e6b7fb5e1ca5ec4803c3bcbe2591745ff8898d63eac176. Mar 4 01:16:00.409362 systemd[1]: cri-containerd-6fcc25a695474c05f7e6b7fb5e1ca5ec4803c3bcbe2591745ff8898d63eac176.scope: Deactivated successfully. Mar 4 01:16:00.413160 containerd[1451]: time="2026-03-04T01:16:00.413013407Z" level=info msg="StartContainer for \"6fcc25a695474c05f7e6b7fb5e1ca5ec4803c3bcbe2591745ff8898d63eac176\" returns successfully" Mar 4 01:16:00.438792 containerd[1451]: time="2026-03-04T01:16:00.438534785Z" level=info msg="shim disconnected" id=6fcc25a695474c05f7e6b7fb5e1ca5ec4803c3bcbe2591745ff8898d63eac176 namespace=k8s.io Mar 4 01:16:00.438792 containerd[1451]: time="2026-03-04T01:16:00.438609635Z" level=warning msg="cleaning up after shim disconnected" id=6fcc25a695474c05f7e6b7fb5e1ca5ec4803c3bcbe2591745ff8898d63eac176 namespace=k8s.io Mar 4 01:16:00.438792 containerd[1451]: time="2026-03-04T01:16:00.438619073Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 4 01:16:01.185080 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6fcc25a695474c05f7e6b7fb5e1ca5ec4803c3bcbe2591745ff8898d63eac176-rootfs.mount: Deactivated successfully. 
Mar 4 01:16:01.316530 kubelet[2503]: E0304 01:16:01.315729 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:16:01.324619 containerd[1451]: time="2026-03-04T01:16:01.324475787Z" level=info msg="CreateContainer within sandbox \"e69f214e53e773a1d6c8e7835b031dc912fec5d85336fe6c05c0c6f5d6f68fb5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 4 01:16:01.354265 containerd[1451]: time="2026-03-04T01:16:01.354117528Z" level=info msg="CreateContainer within sandbox \"e69f214e53e773a1d6c8e7835b031dc912fec5d85336fe6c05c0c6f5d6f68fb5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7bf2dbd192e1a095a6cc557eb4d9ce57add8c0220eb80730ddf5a557f3f2ace8\"" Mar 4 01:16:01.355404 containerd[1451]: time="2026-03-04T01:16:01.355361111Z" level=info msg="StartContainer for \"7bf2dbd192e1a095a6cc557eb4d9ce57add8c0220eb80730ddf5a557f3f2ace8\"" Mar 4 01:16:01.413099 systemd[1]: Started cri-containerd-7bf2dbd192e1a095a6cc557eb4d9ce57add8c0220eb80730ddf5a557f3f2ace8.scope - libcontainer container 7bf2dbd192e1a095a6cc557eb4d9ce57add8c0220eb80730ddf5a557f3f2ace8. Mar 4 01:16:01.469984 containerd[1451]: time="2026-03-04T01:16:01.469824398Z" level=info msg="StartContainer for \"7bf2dbd192e1a095a6cc557eb4d9ce57add8c0220eb80730ddf5a557f3f2ace8\" returns successfully" Mar 4 01:16:01.671244 kubelet[2503]: I0304 01:16:01.671156 2503 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Mar 4 01:16:01.740206 systemd[1]: Created slice kubepods-burstable-podd58ab700_4803_4946_979d_c4041b91337a.slice - libcontainer container kubepods-burstable-podd58ab700_4803_4946_979d_c4041b91337a.slice. Mar 4 01:16:01.751382 systemd[1]: Created slice kubepods-burstable-pod3575bd71_7391_441b_a942_e6deba657dbb.slice - libcontainer container kubepods-burstable-pod3575bd71_7391_441b_a942_e6deba657dbb.slice. 
Mar 4 01:16:01.801097 kubelet[2503]: I0304 01:16:01.801039 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwjhb\" (UniqueName: \"kubernetes.io/projected/d58ab700-4803-4946-979d-c4041b91337a-kube-api-access-hwjhb\") pod \"coredns-7d764666f9-rp6vm\" (UID: \"d58ab700-4803-4946-979d-c4041b91337a\") " pod="kube-system/coredns-7d764666f9-rp6vm" Mar 4 01:16:01.801097 kubelet[2503]: I0304 01:16:01.801087 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3575bd71-7391-441b-a942-e6deba657dbb-config-volume\") pod \"coredns-7d764666f9-s4j9g\" (UID: \"3575bd71-7391-441b-a942-e6deba657dbb\") " pod="kube-system/coredns-7d764666f9-s4j9g" Mar 4 01:16:01.801097 kubelet[2503]: I0304 01:16:01.801103 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6fvf\" (UniqueName: \"kubernetes.io/projected/3575bd71-7391-441b-a942-e6deba657dbb-kube-api-access-j6fvf\") pod \"coredns-7d764666f9-s4j9g\" (UID: \"3575bd71-7391-441b-a942-e6deba657dbb\") " pod="kube-system/coredns-7d764666f9-s4j9g" Mar 4 01:16:01.801365 kubelet[2503]: I0304 01:16:01.801120 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d58ab700-4803-4946-979d-c4041b91337a-config-volume\") pod \"coredns-7d764666f9-rp6vm\" (UID: \"d58ab700-4803-4946-979d-c4041b91337a\") " pod="kube-system/coredns-7d764666f9-rp6vm" Mar 4 01:16:02.049192 kubelet[2503]: E0304 01:16:02.048641 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:16:02.060412 kubelet[2503]: E0304 01:16:02.058927 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:16:02.064271 containerd[1451]: time="2026-03-04T01:16:02.064216778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-rp6vm,Uid:d58ab700-4803-4946-979d-c4041b91337a,Namespace:kube-system,Attempt:0,}" Mar 4 01:16:02.066046 containerd[1451]: time="2026-03-04T01:16:02.064834443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-s4j9g,Uid:3575bd71-7391-441b-a942-e6deba657dbb,Namespace:kube-system,Attempt:0,}" Mar 4 01:16:02.191260 systemd[1]: run-containerd-runc-k8s.io-7bf2dbd192e1a095a6cc557eb4d9ce57add8c0220eb80730ddf5a557f3f2ace8-runc.8YlXZM.mount: Deactivated successfully. Mar 4 01:16:02.321182 kubelet[2503]: E0304 01:16:02.320925 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:16:02.338604 kubelet[2503]: I0304 01:16:02.338446 2503 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-mx5kk" podStartSLOduration=2.5498122949999997 podStartE2EDuration="19.3384287s" podCreationTimestamp="2026-03-04 01:15:43 +0000 UTC" firstStartedPulling="2026-03-04 01:15:44.528455637 +0000 UTC m=+5.717146472" lastFinishedPulling="2026-03-04 01:16:01.317072041 +0000 UTC m=+22.505762877" observedRunningTime="2026-03-04 01:16:02.338091262 +0000 UTC m=+23.526782107" watchObservedRunningTime="2026-03-04 01:16:02.3384287 +0000 UTC m=+23.527119535" Mar 4 01:16:03.324020 kubelet[2503]: E0304 01:16:03.323875 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:16:03.802210 systemd-networkd[1388]: cilium_host: Link UP Mar 4 01:16:03.802450 systemd-networkd[1388]: cilium_net: Link UP Mar 4 01:16:03.802457 systemd-networkd[1388]: 
cilium_net: Gained carrier Mar 4 01:16:03.802720 systemd-networkd[1388]: cilium_host: Gained carrier Mar 4 01:16:03.806041 systemd-networkd[1388]: cilium_host: Gained IPv6LL Mar 4 01:16:03.949291 systemd-networkd[1388]: cilium_vxlan: Link UP Mar 4 01:16:03.949305 systemd-networkd[1388]: cilium_vxlan: Gained carrier Mar 4 01:16:04.211867 kernel: NET: Registered PF_ALG protocol family Mar 4 01:16:04.326023 kubelet[2503]: E0304 01:16:04.325849 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:16:04.480077 systemd-networkd[1388]: cilium_net: Gained IPv6LL Mar 4 01:16:05.061406 systemd-networkd[1388]: lxc_health: Link UP Mar 4 01:16:05.068337 systemd-networkd[1388]: lxc_health: Gained carrier Mar 4 01:16:05.159341 systemd-networkd[1388]: lxcbb66d9859056: Link UP Mar 4 01:16:05.168892 kernel: eth0: renamed from tmp5725d Mar 4 01:16:05.181483 systemd-networkd[1388]: lxcb590876384b5: Link UP Mar 4 01:16:05.197668 systemd-networkd[1388]: lxcbb66d9859056: Gained carrier Mar 4 01:16:05.197854 kernel: eth0: renamed from tmp59982 Mar 4 01:16:05.210385 systemd-networkd[1388]: lxcb590876384b5: Gained carrier Mar 4 01:16:05.824299 systemd-networkd[1388]: cilium_vxlan: Gained IPv6LL Mar 4 01:16:06.311497 kubelet[2503]: E0304 01:16:06.311464 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:16:06.720130 systemd-networkd[1388]: lxcb590876384b5: Gained IPv6LL Mar 4 01:16:07.104047 systemd-networkd[1388]: lxc_health: Gained IPv6LL Mar 4 01:16:07.168091 systemd-networkd[1388]: lxcbb66d9859056: Gained IPv6LL Mar 4 01:16:08.597245 systemd[1]: Started sshd@7-10.0.0.125:22-10.0.0.1:46904.service - OpenSSH per-connection server daemon (10.0.0.1:46904). 
Mar 4 01:16:08.659715 sshd[3740]: Accepted publickey for core from 10.0.0.1 port 46904 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:16:08.661953 sshd[3740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:16:08.667480 systemd-logind[1443]: New session 8 of user core. Mar 4 01:16:08.671992 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 4 01:16:08.878032 sshd[3740]: pam_unix(sshd:session): session closed for user core Mar 4 01:16:08.882331 systemd-logind[1443]: Session 8 logged out. Waiting for processes to exit. Mar 4 01:16:08.883597 systemd[1]: sshd@7-10.0.0.125:22-10.0.0.1:46904.service: Deactivated successfully. Mar 4 01:16:08.888122 systemd[1]: session-8.scope: Deactivated successfully. Mar 4 01:16:08.891117 systemd-logind[1443]: Removed session 8. Mar 4 01:16:08.964131 containerd[1451]: time="2026-03-04T01:16:08.963895431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:16:08.964131 containerd[1451]: time="2026-03-04T01:16:08.963941928Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:16:08.964131 containerd[1451]: time="2026-03-04T01:16:08.963955553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:16:08.964131 containerd[1451]: time="2026-03-04T01:16:08.964021526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:16:08.999942 systemd[1]: Started cri-containerd-59982f3f3f620076de54bc3694ddf423dcb68cb30274f5a09e638338165ab168.scope - libcontainer container 59982f3f3f620076de54bc3694ddf423dcb68cb30274f5a09e638338165ab168. 
Mar 4 01:16:09.008568 containerd[1451]: time="2026-03-04T01:16:09.008407207Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:16:09.008627 containerd[1451]: time="2026-03-04T01:16:09.008585459Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:16:09.008649 containerd[1451]: time="2026-03-04T01:16:09.008621125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:16:09.009144 containerd[1451]: time="2026-03-04T01:16:09.008816610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:16:09.018653 systemd-resolved[1390]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 4 01:16:09.030909 systemd[1]: Started cri-containerd-5725d77c90853f0d42b90d313f28b06002397ec91df1408b9b2d2602d48afc41.scope - libcontainer container 5725d77c90853f0d42b90d313f28b06002397ec91df1408b9b2d2602d48afc41. 
Mar 4 01:16:09.052802 systemd-resolved[1390]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 4 01:16:09.059345 containerd[1451]: time="2026-03-04T01:16:09.059296178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-rp6vm,Uid:d58ab700-4803-4946-979d-c4041b91337a,Namespace:kube-system,Attempt:0,} returns sandbox id \"59982f3f3f620076de54bc3694ddf423dcb68cb30274f5a09e638338165ab168\"" Mar 4 01:16:09.060059 kubelet[2503]: E0304 01:16:09.060034 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:16:09.065598 containerd[1451]: time="2026-03-04T01:16:09.065538159Z" level=info msg="CreateContainer within sandbox \"59982f3f3f620076de54bc3694ddf423dcb68cb30274f5a09e638338165ab168\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 4 01:16:09.085240 containerd[1451]: time="2026-03-04T01:16:09.085100793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-s4j9g,Uid:3575bd71-7391-441b-a942-e6deba657dbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"5725d77c90853f0d42b90d313f28b06002397ec91df1408b9b2d2602d48afc41\"" Mar 4 01:16:09.085961 kubelet[2503]: E0304 01:16:09.085915 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:16:09.091365 containerd[1451]: time="2026-03-04T01:16:09.091307945Z" level=info msg="CreateContainer within sandbox \"5725d77c90853f0d42b90d313f28b06002397ec91df1408b9b2d2602d48afc41\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 4 01:16:09.092335 containerd[1451]: time="2026-03-04T01:16:09.092253798Z" level=info msg="CreateContainer within sandbox \"59982f3f3f620076de54bc3694ddf423dcb68cb30274f5a09e638338165ab168\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f8b87bf0d4ac705e177c682398f5356157e92e41e6bab838ff72a46e57524eac\"" Mar 4 01:16:09.092832 containerd[1451]: time="2026-03-04T01:16:09.092712586Z" level=info msg="StartContainer for \"f8b87bf0d4ac705e177c682398f5356157e92e41e6bab838ff72a46e57524eac\"" Mar 4 01:16:09.109530 containerd[1451]: time="2026-03-04T01:16:09.109501419Z" level=info msg="CreateContainer within sandbox \"5725d77c90853f0d42b90d313f28b06002397ec91df1408b9b2d2602d48afc41\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e1b0750e987124fd36468cbd00c0e913c0f5380b1e8123a9bca02a70dc7f5860\"" Mar 4 01:16:09.111551 containerd[1451]: time="2026-03-04T01:16:09.111049531Z" level=info msg="StartContainer for \"e1b0750e987124fd36468cbd00c0e913c0f5380b1e8123a9bca02a70dc7f5860\"" Mar 4 01:16:09.125912 systemd[1]: Started cri-containerd-f8b87bf0d4ac705e177c682398f5356157e92e41e6bab838ff72a46e57524eac.scope - libcontainer container f8b87bf0d4ac705e177c682398f5356157e92e41e6bab838ff72a46e57524eac. Mar 4 01:16:09.155962 systemd[1]: Started cri-containerd-e1b0750e987124fd36468cbd00c0e913c0f5380b1e8123a9bca02a70dc7f5860.scope - libcontainer container e1b0750e987124fd36468cbd00c0e913c0f5380b1e8123a9bca02a70dc7f5860. 
Mar 4 01:16:09.169670 containerd[1451]: time="2026-03-04T01:16:09.169561559Z" level=info msg="StartContainer for \"f8b87bf0d4ac705e177c682398f5356157e92e41e6bab838ff72a46e57524eac\" returns successfully" Mar 4 01:16:09.195262 containerd[1451]: time="2026-03-04T01:16:09.195138486Z" level=info msg="StartContainer for \"e1b0750e987124fd36468cbd00c0e913c0f5380b1e8123a9bca02a70dc7f5860\" returns successfully" Mar 4 01:16:09.341482 kubelet[2503]: E0304 01:16:09.341405 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:16:09.344660 kubelet[2503]: E0304 01:16:09.344432 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:16:09.354358 kubelet[2503]: I0304 01:16:09.354249 2503 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-rp6vm" podStartSLOduration=25.354238204 podStartE2EDuration="25.354238204s" podCreationTimestamp="2026-03-04 01:15:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:16:09.354108006 +0000 UTC m=+30.542798841" watchObservedRunningTime="2026-03-04 01:16:09.354238204 +0000 UTC m=+30.542929039" Mar 4 01:16:09.379654 kubelet[2503]: I0304 01:16:09.379562 2503 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-s4j9g" podStartSLOduration=25.379547806 podStartE2EDuration="25.379547806s" podCreationTimestamp="2026-03-04 01:15:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:16:09.365218618 +0000 UTC m=+30.553909453" watchObservedRunningTime="2026-03-04 01:16:09.379547806 +0000 UTC 
m=+30.568238641" Mar 4 01:16:09.974079 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount363882347.mount: Deactivated successfully. Mar 4 01:16:10.347018 kubelet[2503]: E0304 01:16:10.346926 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:16:10.347018 kubelet[2503]: E0304 01:16:10.346955 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:16:11.349460 kubelet[2503]: E0304 01:16:11.349410 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:16:11.349460 kubelet[2503]: E0304 01:16:11.349410 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:16:11.659671 kubelet[2503]: I0304 01:16:11.659464 2503 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 4 01:16:11.660417 kubelet[2503]: E0304 01:16:11.659949 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:16:12.351082 kubelet[2503]: E0304 01:16:12.351030 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:16:13.892519 systemd[1]: Started sshd@8-10.0.0.125:22-10.0.0.1:58492.service - OpenSSH per-connection server daemon (10.0.0.1:58492). 
Mar 4 01:16:13.933796 sshd[3936]: Accepted publickey for core from 10.0.0.1 port 58492 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:16:13.935566 sshd[3936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:16:13.941680 systemd-logind[1443]: New session 9 of user core. Mar 4 01:16:13.949942 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 4 01:16:14.225988 sshd[3936]: pam_unix(sshd:session): session closed for user core Mar 4 01:16:14.230595 systemd[1]: sshd@8-10.0.0.125:22-10.0.0.1:58492.service: Deactivated successfully. Mar 4 01:16:14.232733 systemd[1]: session-9.scope: Deactivated successfully. Mar 4 01:16:14.233962 systemd-logind[1443]: Session 9 logged out. Waiting for processes to exit. Mar 4 01:16:14.235551 systemd-logind[1443]: Removed session 9. Mar 4 01:16:19.246948 systemd[1]: Started sshd@9-10.0.0.125:22-10.0.0.1:46378.service - OpenSSH per-connection server daemon (10.0.0.1:46378). Mar 4 01:16:19.288562 sshd[3954]: Accepted publickey for core from 10.0.0.1 port 46378 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:16:19.290378 sshd[3954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:16:19.294959 systemd-logind[1443]: New session 10 of user core. Mar 4 01:16:19.304905 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 4 01:16:19.418964 sshd[3954]: pam_unix(sshd:session): session closed for user core Mar 4 01:16:19.423283 systemd[1]: sshd@9-10.0.0.125:22-10.0.0.1:46378.service: Deactivated successfully. Mar 4 01:16:19.425156 systemd[1]: session-10.scope: Deactivated successfully. Mar 4 01:16:19.426551 systemd-logind[1443]: Session 10 logged out. Waiting for processes to exit. Mar 4 01:16:19.428268 systemd-logind[1443]: Removed session 10. Mar 4 01:16:24.455393 systemd[1]: Started sshd@10-10.0.0.125:22-10.0.0.1:46390.service - OpenSSH per-connection server daemon (10.0.0.1:46390). 
Mar 4 01:16:24.483946 sshd[3969]: Accepted publickey for core from 10.0.0.1 port 46390 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:16:24.485551 sshd[3969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:16:24.490284 systemd-logind[1443]: New session 11 of user core. Mar 4 01:16:24.500949 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 4 01:16:24.609958 sshd[3969]: pam_unix(sshd:session): session closed for user core Mar 4 01:16:24.613979 systemd[1]: sshd@10-10.0.0.125:22-10.0.0.1:46390.service: Deactivated successfully. Mar 4 01:16:24.616311 systemd[1]: session-11.scope: Deactivated successfully. Mar 4 01:16:24.617154 systemd-logind[1443]: Session 11 logged out. Waiting for processes to exit. Mar 4 01:16:24.618613 systemd-logind[1443]: Removed session 11. Mar 4 01:16:29.620918 systemd[1]: Started sshd@11-10.0.0.125:22-10.0.0.1:38430.service - OpenSSH per-connection server daemon (10.0.0.1:38430). Mar 4 01:16:29.655614 sshd[3984]: Accepted publickey for core from 10.0.0.1 port 38430 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:16:29.657366 sshd[3984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:16:29.662983 systemd-logind[1443]: New session 12 of user core. Mar 4 01:16:29.669942 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 4 01:16:29.805198 sshd[3984]: pam_unix(sshd:session): session closed for user core Mar 4 01:16:29.810179 systemd[1]: sshd@11-10.0.0.125:22-10.0.0.1:38430.service: Deactivated successfully. Mar 4 01:16:29.813225 systemd[1]: session-12.scope: Deactivated successfully. Mar 4 01:16:29.814519 systemd-logind[1443]: Session 12 logged out. Waiting for processes to exit. Mar 4 01:16:29.816056 systemd-logind[1443]: Removed session 12. Mar 4 01:16:34.817810 systemd[1]: Started sshd@12-10.0.0.125:22-10.0.0.1:38444.service - OpenSSH per-connection server daemon (10.0.0.1:38444). 
Mar 4 01:16:34.857266 sshd[3999]: Accepted publickey for core from 10.0.0.1 port 38444 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:16:34.858973 sshd[3999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:16:34.864800 systemd-logind[1443]: New session 13 of user core. Mar 4 01:16:34.876060 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 4 01:16:34.991721 sshd[3999]: pam_unix(sshd:session): session closed for user core Mar 4 01:16:34.995988 systemd[1]: sshd@12-10.0.0.125:22-10.0.0.1:38444.service: Deactivated successfully. Mar 4 01:16:34.997954 systemd[1]: session-13.scope: Deactivated successfully. Mar 4 01:16:34.999137 systemd-logind[1443]: Session 13 logged out. Waiting for processes to exit. Mar 4 01:16:35.001002 systemd-logind[1443]: Removed session 13. Mar 4 01:16:40.005325 systemd[1]: Started sshd@13-10.0.0.125:22-10.0.0.1:33950.service - OpenSSH per-connection server daemon (10.0.0.1:33950). Mar 4 01:16:40.049670 sshd[4016]: Accepted publickey for core from 10.0.0.1 port 33950 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:16:40.052201 sshd[4016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:16:40.059479 systemd-logind[1443]: New session 14 of user core. Mar 4 01:16:40.066033 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 4 01:16:40.213808 sshd[4016]: pam_unix(sshd:session): session closed for user core Mar 4 01:16:40.218113 systemd[1]: sshd@13-10.0.0.125:22-10.0.0.1:33950.service: Deactivated successfully. Mar 4 01:16:40.220516 systemd[1]: session-14.scope: Deactivated successfully. Mar 4 01:16:40.221451 systemd-logind[1443]: Session 14 logged out. Waiting for processes to exit. Mar 4 01:16:40.223619 systemd-logind[1443]: Removed session 14. Mar 4 01:16:45.225357 systemd[1]: Started sshd@14-10.0.0.125:22-10.0.0.1:33962.service - OpenSSH per-connection server daemon (10.0.0.1:33962). 
Mar 4 01:16:45.265668 sshd[4031]: Accepted publickey for core from 10.0.0.1 port 33962 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:16:45.267617 sshd[4031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:16:45.272676 systemd-logind[1443]: New session 15 of user core. Mar 4 01:16:45.280923 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 4 01:16:45.392093 sshd[4031]: pam_unix(sshd:session): session closed for user core Mar 4 01:16:45.396686 systemd[1]: sshd@14-10.0.0.125:22-10.0.0.1:33962.service: Deactivated successfully. Mar 4 01:16:45.399048 systemd[1]: session-15.scope: Deactivated successfully. Mar 4 01:16:45.400542 systemd-logind[1443]: Session 15 logged out. Waiting for processes to exit. Mar 4 01:16:45.402042 systemd-logind[1443]: Removed session 15. Mar 4 01:16:50.410070 systemd[1]: Started sshd@15-10.0.0.125:22-10.0.0.1:59156.service - OpenSSH per-connection server daemon (10.0.0.1:59156). Mar 4 01:16:50.448657 sshd[4048]: Accepted publickey for core from 10.0.0.1 port 59156 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:16:50.450259 sshd[4048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:16:50.456419 systemd-logind[1443]: New session 16 of user core. Mar 4 01:16:50.466024 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 4 01:16:50.592031 sshd[4048]: pam_unix(sshd:session): session closed for user core Mar 4 01:16:50.596538 systemd[1]: sshd@15-10.0.0.125:22-10.0.0.1:59156.service: Deactivated successfully. Mar 4 01:16:50.600271 systemd[1]: session-16.scope: Deactivated successfully. Mar 4 01:16:50.601990 systemd-logind[1443]: Session 16 logged out. Waiting for processes to exit. Mar 4 01:16:50.603940 systemd-logind[1443]: Removed session 16. Mar 4 01:16:55.630211 systemd[1]: Started sshd@16-10.0.0.125:22-10.0.0.1:59164.service - OpenSSH per-connection server daemon (10.0.0.1:59164). 
Mar 4 01:16:55.671660 sshd[4064]: Accepted publickey for core from 10.0.0.1 port 59164 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:16:55.673923 sshd[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:16:55.680976 systemd-logind[1443]: New session 17 of user core. Mar 4 01:16:55.689084 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 4 01:16:55.833237 sshd[4064]: pam_unix(sshd:session): session closed for user core Mar 4 01:16:55.838497 systemd[1]: sshd@16-10.0.0.125:22-10.0.0.1:59164.service: Deactivated successfully. Mar 4 01:16:55.841286 systemd[1]: session-17.scope: Deactivated successfully. Mar 4 01:16:55.843424 systemd-logind[1443]: Session 17 logged out. Waiting for processes to exit. Mar 4 01:16:55.845160 systemd-logind[1443]: Removed session 17. Mar 4 01:17:00.845827 systemd[1]: Started sshd@17-10.0.0.125:22-10.0.0.1:52304.service - OpenSSH per-connection server daemon (10.0.0.1:52304). Mar 4 01:17:00.896039 sshd[4079]: Accepted publickey for core from 10.0.0.1 port 52304 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:17:00.898436 sshd[4079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:17:00.904127 systemd-logind[1443]: New session 18 of user core. Mar 4 01:17:00.914020 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 4 01:17:01.043491 sshd[4079]: pam_unix(sshd:session): session closed for user core Mar 4 01:17:01.048411 systemd[1]: sshd@17-10.0.0.125:22-10.0.0.1:52304.service: Deactivated successfully. Mar 4 01:17:01.051004 systemd[1]: session-18.scope: Deactivated successfully. Mar 4 01:17:01.052159 systemd-logind[1443]: Session 18 logged out. Waiting for processes to exit. Mar 4 01:17:01.053535 systemd-logind[1443]: Removed session 18. Mar 4 01:17:06.060355 systemd[1]: Started sshd@18-10.0.0.125:22-10.0.0.1:52316.service - OpenSSH per-connection server daemon (10.0.0.1:52316). 
Mar 4 01:17:06.117385 sshd[4094]: Accepted publickey for core from 10.0.0.1 port 52316 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:17:06.119400 sshd[4094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:17:06.125723 systemd-logind[1443]: New session 19 of user core. Mar 4 01:17:06.137991 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 4 01:17:06.438881 sshd[4094]: pam_unix(sshd:session): session closed for user core Mar 4 01:17:06.455397 systemd[1]: sshd@18-10.0.0.125:22-10.0.0.1:52316.service: Deactivated successfully. Mar 4 01:17:06.458001 systemd[1]: session-19.scope: Deactivated successfully. Mar 4 01:17:06.460116 systemd-logind[1443]: Session 19 logged out. Waiting for processes to exit. Mar 4 01:17:06.468323 systemd[1]: Started sshd@19-10.0.0.125:22-10.0.0.1:52328.service - OpenSSH per-connection server daemon (10.0.0.1:52328). Mar 4 01:17:06.470230 systemd-logind[1443]: Removed session 19. Mar 4 01:17:06.504544 sshd[4112]: Accepted publickey for core from 10.0.0.1 port 52328 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:17:06.506853 sshd[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:17:06.513482 systemd-logind[1443]: New session 20 of user core. Mar 4 01:17:06.528016 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 4 01:17:06.705972 sshd[4112]: pam_unix(sshd:session): session closed for user core Mar 4 01:17:06.718379 systemd[1]: sshd@19-10.0.0.125:22-10.0.0.1:52328.service: Deactivated successfully. Mar 4 01:17:06.721604 systemd[1]: session-20.scope: Deactivated successfully. Mar 4 01:17:06.724010 systemd-logind[1443]: Session 20 logged out. Waiting for processes to exit. Mar 4 01:17:06.738401 systemd[1]: Started sshd@20-10.0.0.125:22-10.0.0.1:52330.service - OpenSSH per-connection server daemon (10.0.0.1:52330). Mar 4 01:17:06.741868 systemd-logind[1443]: Removed session 20. 
Mar 4 01:17:06.775689 sshd[4124]: Accepted publickey for core from 10.0.0.1 port 52330 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:17:06.778341 sshd[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:17:06.786018 systemd-logind[1443]: New session 21 of user core. Mar 4 01:17:06.794081 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 4 01:17:06.915321 sshd[4124]: pam_unix(sshd:session): session closed for user core Mar 4 01:17:06.918814 systemd[1]: sshd@20-10.0.0.125:22-10.0.0.1:52330.service: Deactivated successfully. Mar 4 01:17:06.920948 systemd[1]: session-21.scope: Deactivated successfully. Mar 4 01:17:06.922669 systemd-logind[1443]: Session 21 logged out. Waiting for processes to exit. Mar 4 01:17:06.924381 systemd-logind[1443]: Removed session 21. Mar 4 01:17:06.962901 kubelet[2503]: E0304 01:17:06.960695 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:17:06.962901 kubelet[2503]: E0304 01:17:06.961053 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:17:11.931711 systemd[1]: Started sshd@21-10.0.0.125:22-10.0.0.1:33556.service - OpenSSH per-connection server daemon (10.0.0.1:33556). 
Mar 4 01:17:11.960717 kubelet[2503]: E0304 01:17:11.960655 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:17:11.960717 kubelet[2503]: E0304 01:17:11.960679 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:17:11.970717 sshd[4138]: Accepted publickey for core from 10.0.0.1 port 33556 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:17:11.971856 sshd[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:17:11.978669 systemd-logind[1443]: New session 22 of user core. Mar 4 01:17:11.996328 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 4 01:17:12.129572 sshd[4138]: pam_unix(sshd:session): session closed for user core Mar 4 01:17:12.135627 systemd[1]: sshd@21-10.0.0.125:22-10.0.0.1:33556.service: Deactivated successfully. Mar 4 01:17:12.138332 systemd[1]: session-22.scope: Deactivated successfully. Mar 4 01:17:12.139562 systemd-logind[1443]: Session 22 logged out. Waiting for processes to exit. Mar 4 01:17:12.141736 systemd-logind[1443]: Removed session 22. Mar 4 01:17:13.962642 kubelet[2503]: E0304 01:17:13.960069 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:17:17.175972 systemd[1]: Started sshd@22-10.0.0.125:22-10.0.0.1:33566.service - OpenSSH per-connection server daemon (10.0.0.1:33566). 
Mar 4 01:17:17.247133 sshd[4156]: Accepted publickey for core from 10.0.0.1 port 33566 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:17:17.253320 sshd[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:17:17.273854 systemd-logind[1443]: New session 23 of user core. Mar 4 01:17:17.280405 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 4 01:17:17.534012 sshd[4156]: pam_unix(sshd:session): session closed for user core Mar 4 01:17:17.549219 systemd[1]: sshd@22-10.0.0.125:22-10.0.0.1:33566.service: Deactivated successfully. Mar 4 01:17:17.561560 systemd[1]: session-23.scope: Deactivated successfully. Mar 4 01:17:17.567055 systemd-logind[1443]: Session 23 logged out. Waiting for processes to exit. Mar 4 01:17:17.571690 systemd-logind[1443]: Removed session 23. Mar 4 01:17:22.583173 systemd[1]: Started sshd@23-10.0.0.125:22-10.0.0.1:50686.service - OpenSSH per-connection server daemon (10.0.0.1:50686). Mar 4 01:17:22.704386 sshd[4172]: Accepted publickey for core from 10.0.0.1 port 50686 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:17:22.707999 sshd[4172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:17:22.748583 systemd-logind[1443]: New session 24 of user core. Mar 4 01:17:22.760583 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 4 01:17:23.061874 sshd[4172]: pam_unix(sshd:session): session closed for user core Mar 4 01:17:23.072438 systemd[1]: sshd@23-10.0.0.125:22-10.0.0.1:50686.service: Deactivated successfully. Mar 4 01:17:23.076842 systemd[1]: session-24.scope: Deactivated successfully. Mar 4 01:17:23.079185 systemd-logind[1443]: Session 24 logged out. Waiting for processes to exit. Mar 4 01:17:23.083377 systemd-logind[1443]: Removed session 24. 
Mar 4 01:17:23.960355 kubelet[2503]: E0304 01:17:23.960189 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:17:24.965536 kubelet[2503]: E0304 01:17:24.963151 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:17:27.961557 kubelet[2503]: E0304 01:17:27.961428 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:17:28.076337 systemd[1]: Started sshd@24-10.0.0.125:22-10.0.0.1:50688.service - OpenSSH per-connection server daemon (10.0.0.1:50688). Mar 4 01:17:28.150715 sshd[4186]: Accepted publickey for core from 10.0.0.1 port 50688 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:17:28.154055 sshd[4186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:17:28.169531 systemd-logind[1443]: New session 25 of user core. Mar 4 01:17:28.179231 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 4 01:17:28.354661 sshd[4186]: pam_unix(sshd:session): session closed for user core Mar 4 01:17:28.362086 systemd[1]: sshd@24-10.0.0.125:22-10.0.0.1:50688.service: Deactivated successfully. Mar 4 01:17:28.365343 systemd[1]: session-25.scope: Deactivated successfully. Mar 4 01:17:28.367000 systemd-logind[1443]: Session 25 logged out. Waiting for processes to exit. Mar 4 01:17:28.369339 systemd-logind[1443]: Removed session 25. Mar 4 01:17:33.404888 systemd[1]: Started sshd@25-10.0.0.125:22-10.0.0.1:34584.service - OpenSSH per-connection server daemon (10.0.0.1:34584). 
Mar 4 01:17:33.577471 sshd[4200]: Accepted publickey for core from 10.0.0.1 port 34584 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:17:33.586939 sshd[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:17:33.634976 systemd-logind[1443]: New session 26 of user core. Mar 4 01:17:33.649205 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 4 01:17:34.029392 sshd[4200]: pam_unix(sshd:session): session closed for user core Mar 4 01:17:34.051498 systemd[1]: sshd@25-10.0.0.125:22-10.0.0.1:34584.service: Deactivated successfully. Mar 4 01:17:34.056450 systemd[1]: session-26.scope: Deactivated successfully. Mar 4 01:17:34.059367 systemd-logind[1443]: Session 26 logged out. Waiting for processes to exit. Mar 4 01:17:34.075604 systemd[1]: Started sshd@26-10.0.0.125:22-10.0.0.1:34596.service - OpenSSH per-connection server daemon (10.0.0.1:34596). Mar 4 01:17:34.081657 systemd-logind[1443]: Removed session 26. Mar 4 01:17:34.157642 sshd[4215]: Accepted publickey for core from 10.0.0.1 port 34596 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:17:34.163449 sshd[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:17:34.177427 systemd-logind[1443]: New session 27 of user core. Mar 4 01:17:34.196381 systemd[1]: Started session-27.scope - Session 27 of User core. Mar 4 01:17:35.011630 sshd[4215]: pam_unix(sshd:session): session closed for user core Mar 4 01:17:35.026933 systemd[1]: sshd@26-10.0.0.125:22-10.0.0.1:34596.service: Deactivated successfully. Mar 4 01:17:35.030179 systemd[1]: session-27.scope: Deactivated successfully. Mar 4 01:17:35.033356 systemd-logind[1443]: Session 27 logged out. Waiting for processes to exit. Mar 4 01:17:35.042660 systemd[1]: Started sshd@27-10.0.0.125:22-10.0.0.1:34610.service - OpenSSH per-connection server daemon (10.0.0.1:34610). Mar 4 01:17:35.047092 systemd-logind[1443]: Removed session 27. 
Mar 4 01:17:35.195615 sshd[4229]: Accepted publickey for core from 10.0.0.1 port 34610 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:17:35.196454 sshd[4229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:17:35.214388 systemd-logind[1443]: New session 28 of user core. Mar 4 01:17:35.226422 systemd[1]: Started session-28.scope - Session 28 of User core. Mar 4 01:17:36.603401 sshd[4229]: pam_unix(sshd:session): session closed for user core Mar 4 01:17:36.627008 systemd[1]: sshd@27-10.0.0.125:22-10.0.0.1:34610.service: Deactivated successfully. Mar 4 01:17:36.630986 systemd[1]: session-28.scope: Deactivated successfully. Mar 4 01:17:36.640226 systemd-logind[1443]: Session 28 logged out. Waiting for processes to exit. Mar 4 01:17:36.659504 systemd[1]: Started sshd@28-10.0.0.125:22-10.0.0.1:34624.service - OpenSSH per-connection server daemon (10.0.0.1:34624). Mar 4 01:17:36.664204 systemd-logind[1443]: Removed session 28. Mar 4 01:17:36.734630 sshd[4250]: Accepted publickey for core from 10.0.0.1 port 34624 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:17:36.741240 sshd[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:17:36.764571 systemd-logind[1443]: New session 29 of user core. Mar 4 01:17:36.774533 systemd[1]: Started session-29.scope - Session 29 of User core. Mar 4 01:17:37.184435 sshd[4250]: pam_unix(sshd:session): session closed for user core Mar 4 01:17:37.208943 systemd[1]: sshd@28-10.0.0.125:22-10.0.0.1:34624.service: Deactivated successfully. Mar 4 01:17:37.221105 systemd[1]: session-29.scope: Deactivated successfully. Mar 4 01:17:37.229884 systemd-logind[1443]: Session 29 logged out. Waiting for processes to exit. Mar 4 01:17:37.241571 systemd[1]: Started sshd@29-10.0.0.125:22-10.0.0.1:34640.service - OpenSSH per-connection server daemon (10.0.0.1:34640). Mar 4 01:17:37.244577 systemd-logind[1443]: Removed session 29. 
Mar 4 01:17:37.302568 sshd[4262]: Accepted publickey for core from 10.0.0.1 port 34640 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:17:37.309244 sshd[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:17:37.324626 systemd-logind[1443]: New session 30 of user core. Mar 4 01:17:37.333226 systemd[1]: Started session-30.scope - Session 30 of User core. Mar 4 01:17:37.544427 sshd[4262]: pam_unix(sshd:session): session closed for user core Mar 4 01:17:37.552940 systemd[1]: sshd@29-10.0.0.125:22-10.0.0.1:34640.service: Deactivated successfully. Mar 4 01:17:37.556713 systemd[1]: session-30.scope: Deactivated successfully. Mar 4 01:17:37.559126 systemd-logind[1443]: Session 30 logged out. Waiting for processes to exit. Mar 4 01:17:37.562996 systemd-logind[1443]: Removed session 30. Mar 4 01:17:42.570368 systemd[1]: Started sshd@30-10.0.0.125:22-10.0.0.1:33258.service - OpenSSH per-connection server daemon (10.0.0.1:33258). Mar 4 01:17:42.610704 sshd[4280]: Accepted publickey for core from 10.0.0.1 port 33258 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:17:42.612957 sshd[4280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:17:42.625491 systemd-logind[1443]: New session 31 of user core. Mar 4 01:17:42.635366 systemd[1]: Started session-31.scope - Session 31 of User core. Mar 4 01:17:42.901851 sshd[4280]: pam_unix(sshd:session): session closed for user core Mar 4 01:17:42.909952 systemd[1]: sshd@30-10.0.0.125:22-10.0.0.1:33258.service: Deactivated successfully. Mar 4 01:17:42.913600 systemd[1]: session-31.scope: Deactivated successfully. Mar 4 01:17:42.915185 systemd-logind[1443]: Session 31 logged out. Waiting for processes to exit. Mar 4 01:17:42.917662 systemd-logind[1443]: Removed session 31. Mar 4 01:17:47.938487 systemd[1]: Started sshd@31-10.0.0.125:22-10.0.0.1:33272.service - OpenSSH per-connection server daemon (10.0.0.1:33272). 
Mar 4 01:17:47.973070 sshd[4298]: Accepted publickey for core from 10.0.0.1 port 33272 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:17:47.975643 sshd[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:17:47.985402 systemd-logind[1443]: New session 32 of user core.
Mar 4 01:17:48.000124 systemd[1]: Started session-32.scope - Session 32 of User core.
Mar 4 01:17:48.153021 sshd[4298]: pam_unix(sshd:session): session closed for user core
Mar 4 01:17:48.159020 systemd[1]: sshd@31-10.0.0.125:22-10.0.0.1:33272.service: Deactivated successfully.
Mar 4 01:17:48.162152 systemd[1]: session-32.scope: Deactivated successfully.
Mar 4 01:17:48.163598 systemd-logind[1443]: Session 32 logged out. Waiting for processes to exit.
Mar 4 01:17:48.165281 systemd-logind[1443]: Removed session 32.
Mar 4 01:17:53.168638 systemd[1]: Started sshd@32-10.0.0.125:22-10.0.0.1:56624.service - OpenSSH per-connection server daemon (10.0.0.1:56624).
Mar 4 01:17:53.220178 sshd[4312]: Accepted publickey for core from 10.0.0.1 port 56624 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:17:53.223047 sshd[4312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:17:53.231122 systemd-logind[1443]: New session 33 of user core.
Mar 4 01:17:53.243209 systemd[1]: Started session-33.scope - Session 33 of User core.
Mar 4 01:17:53.391174 sshd[4312]: pam_unix(sshd:session): session closed for user core
Mar 4 01:17:53.404647 systemd[1]: sshd@32-10.0.0.125:22-10.0.0.1:56624.service: Deactivated successfully.
Mar 4 01:17:53.407030 systemd[1]: session-33.scope: Deactivated successfully.
Mar 4 01:17:53.409267 systemd-logind[1443]: Session 33 logged out. Waiting for processes to exit.
Mar 4 01:17:53.417182 systemd[1]: Started sshd@33-10.0.0.125:22-10.0.0.1:56626.service - OpenSSH per-connection server daemon (10.0.0.1:56626).
Mar 4 01:17:53.419011 systemd-logind[1443]: Removed session 33.
Mar 4 01:17:53.459541 sshd[4326]: Accepted publickey for core from 10.0.0.1 port 56626 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U
Mar 4 01:17:53.461692 sshd[4326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 01:17:53.468530 systemd-logind[1443]: New session 34 of user core.
Mar 4 01:17:53.479013 systemd[1]: Started session-34.scope - Session 34 of User core.
Mar 4 01:17:54.978584 containerd[1451]: time="2026-03-04T01:17:54.975870771Z" level=info msg="StopContainer for \"5e95cd21adda8cd3e09f9b2d48fed15fb816ac6fff147efe7e0ab2d5a2d7a91a\" with timeout 30 (s)"
Mar 4 01:17:54.979559 containerd[1451]: time="2026-03-04T01:17:54.978569898Z" level=info msg="Stop container \"5e95cd21adda8cd3e09f9b2d48fed15fb816ac6fff147efe7e0ab2d5a2d7a91a\" with signal terminated"
Mar 4 01:17:55.021600 systemd[1]: run-containerd-runc-k8s.io-7bf2dbd192e1a095a6cc557eb4d9ce57add8c0220eb80730ddf5a557f3f2ace8-runc.kPjXoZ.mount: Deactivated successfully.
Mar 4 01:17:55.037621 systemd[1]: cri-containerd-5e95cd21adda8cd3e09f9b2d48fed15fb816ac6fff147efe7e0ab2d5a2d7a91a.scope: Deactivated successfully.
Mar 4 01:17:55.067524 containerd[1451]: time="2026-03-04T01:17:55.067385457Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 4 01:17:55.073058 containerd[1451]: time="2026-03-04T01:17:55.072965087Z" level=info msg="StopContainer for \"7bf2dbd192e1a095a6cc557eb4d9ce57add8c0220eb80730ddf5a557f3f2ace8\" with timeout 2 (s)"
Mar 4 01:17:55.083274 containerd[1451]: time="2026-03-04T01:17:55.083058715Z" level=info msg="Stop container \"7bf2dbd192e1a095a6cc557eb4d9ce57add8c0220eb80730ddf5a557f3f2ace8\" with signal terminated"
Mar 4 01:17:55.107068 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e95cd21adda8cd3e09f9b2d48fed15fb816ac6fff147efe7e0ab2d5a2d7a91a-rootfs.mount: Deactivated successfully.
Mar 4 01:17:55.121625 systemd-networkd[1388]: lxc_health: Link DOWN
Mar 4 01:17:55.121636 systemd-networkd[1388]: lxc_health: Lost carrier
Mar 4 01:17:55.140192 containerd[1451]: time="2026-03-04T01:17:55.139943211Z" level=info msg="shim disconnected" id=5e95cd21adda8cd3e09f9b2d48fed15fb816ac6fff147efe7e0ab2d5a2d7a91a namespace=k8s.io
Mar 4 01:17:55.140192 containerd[1451]: time="2026-03-04T01:17:55.140029642Z" level=warning msg="cleaning up after shim disconnected" id=5e95cd21adda8cd3e09f9b2d48fed15fb816ac6fff147efe7e0ab2d5a2d7a91a namespace=k8s.io
Mar 4 01:17:55.140192 containerd[1451]: time="2026-03-04T01:17:55.140046433Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 4 01:17:55.149137 systemd[1]: cri-containerd-7bf2dbd192e1a095a6cc557eb4d9ce57add8c0220eb80730ddf5a557f3f2ace8.scope: Deactivated successfully.
Mar 4 01:17:55.149729 systemd[1]: cri-containerd-7bf2dbd192e1a095a6cc557eb4d9ce57add8c0220eb80730ddf5a557f3f2ace8.scope: Consumed 9.074s CPU time.
Mar 4 01:17:55.182421 containerd[1451]: time="2026-03-04T01:17:55.182217313Z" level=info msg="StopContainer for \"5e95cd21adda8cd3e09f9b2d48fed15fb816ac6fff147efe7e0ab2d5a2d7a91a\" returns successfully"
Mar 4 01:17:55.183442 containerd[1451]: time="2026-03-04T01:17:55.183397961Z" level=info msg="StopPodSandbox for \"1aa9d2a6057f7cccd7847164da2362faf1fd31a3ef7d8702952871778790b1bb\""
Mar 4 01:17:55.187666 containerd[1451]: time="2026-03-04T01:17:55.183951202Z" level=info msg="Container to stop \"5e95cd21adda8cd3e09f9b2d48fed15fb816ac6fff147efe7e0ab2d5a2d7a91a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 4 01:17:55.186884 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1aa9d2a6057f7cccd7847164da2362faf1fd31a3ef7d8702952871778790b1bb-shm.mount: Deactivated successfully.
Mar 4 01:17:55.201956 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7bf2dbd192e1a095a6cc557eb4d9ce57add8c0220eb80730ddf5a557f3f2ace8-rootfs.mount: Deactivated successfully.
Mar 4 01:17:55.203234 systemd[1]: cri-containerd-1aa9d2a6057f7cccd7847164da2362faf1fd31a3ef7d8702952871778790b1bb.scope: Deactivated successfully.
Mar 4 01:17:55.218099 containerd[1451]: time="2026-03-04T01:17:55.217735919Z" level=info msg="shim disconnected" id=7bf2dbd192e1a095a6cc557eb4d9ce57add8c0220eb80730ddf5a557f3f2ace8 namespace=k8s.io
Mar 4 01:17:55.218099 containerd[1451]: time="2026-03-04T01:17:55.217867224Z" level=warning msg="cleaning up after shim disconnected" id=7bf2dbd192e1a095a6cc557eb4d9ce57add8c0220eb80730ddf5a557f3f2ace8 namespace=k8s.io
Mar 4 01:17:55.218099 containerd[1451]: time="2026-03-04T01:17:55.217877232Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 4 01:17:55.257996 containerd[1451]: time="2026-03-04T01:17:55.257897224Z" level=info msg="StopContainer for \"7bf2dbd192e1a095a6cc557eb4d9ce57add8c0220eb80730ddf5a557f3f2ace8\" returns successfully"
Mar 4 01:17:55.258820 containerd[1451]: time="2026-03-04T01:17:55.258447789Z" level=info msg="StopPodSandbox for \"e69f214e53e773a1d6c8e7835b031dc912fec5d85336fe6c05c0c6f5d6f68fb5\""
Mar 4 01:17:55.258820 containerd[1451]: time="2026-03-04T01:17:55.258491190Z" level=info msg="Container to stop \"a6021d4843f7ea6709dc6dc0a9e8ced18746e1a8a39b6170ebe6ebdc47ccbdc3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 4 01:17:55.258820 containerd[1451]: time="2026-03-04T01:17:55.258503473Z" level=info msg="Container to stop \"5a6f31578f58bb5ec9ed2b042b2a881a79609d429cb7d163c3ee4ca1557c7897\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 4 01:17:55.258820 containerd[1451]: time="2026-03-04T01:17:55.258512720Z" level=info msg="Container to stop \"6fcc25a695474c05f7e6b7fb5e1ca5ec4803c3bcbe2591745ff8898d63eac176\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 4 01:17:55.258820 containerd[1451]: time="2026-03-04T01:17:55.258523129Z" level=info msg="Container to stop \"8bd19e3ee0d183b330de8c1cfce66f46f52d7e8417021f551bb08e89f563b452\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 4 01:17:55.258820 containerd[1451]: time="2026-03-04T01:17:55.258531224Z" level=info msg="Container to stop \"7bf2dbd192e1a095a6cc557eb4d9ce57add8c0220eb80730ddf5a557f3f2ace8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 4 01:17:55.260659 containerd[1451]: time="2026-03-04T01:17:55.260489978Z" level=info msg="shim disconnected" id=1aa9d2a6057f7cccd7847164da2362faf1fd31a3ef7d8702952871778790b1bb namespace=k8s.io
Mar 4 01:17:55.260659 containerd[1451]: time="2026-03-04T01:17:55.260539369Z" level=warning msg="cleaning up after shim disconnected" id=1aa9d2a6057f7cccd7847164da2362faf1fd31a3ef7d8702952871778790b1bb namespace=k8s.io
Mar 4 01:17:55.260659 containerd[1451]: time="2026-03-04T01:17:55.260552603Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 4 01:17:55.270922 systemd[1]: cri-containerd-e69f214e53e773a1d6c8e7835b031dc912fec5d85336fe6c05c0c6f5d6f68fb5.scope: Deactivated successfully.
Mar 4 01:17:55.291492 containerd[1451]: time="2026-03-04T01:17:55.291392555Z" level=info msg="TearDown network for sandbox \"1aa9d2a6057f7cccd7847164da2362faf1fd31a3ef7d8702952871778790b1bb\" successfully"
Mar 4 01:17:55.291492 containerd[1451]: time="2026-03-04T01:17:55.291471572Z" level=info msg="StopPodSandbox for \"1aa9d2a6057f7cccd7847164da2362faf1fd31a3ef7d8702952871778790b1bb\" returns successfully"
Mar 4 01:17:55.311119 containerd[1451]: time="2026-03-04T01:17:55.310732109Z" level=info msg="shim disconnected" id=e69f214e53e773a1d6c8e7835b031dc912fec5d85336fe6c05c0c6f5d6f68fb5 namespace=k8s.io
Mar 4 01:17:55.311119 containerd[1451]: time="2026-03-04T01:17:55.310852191Z" level=warning msg="cleaning up after shim disconnected" id=e69f214e53e773a1d6c8e7835b031dc912fec5d85336fe6c05c0c6f5d6f68fb5 namespace=k8s.io
Mar 4 01:17:55.311119 containerd[1451]: time="2026-03-04T01:17:55.310866509Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 4 01:17:55.338578 containerd[1451]: time="2026-03-04T01:17:55.338456965Z" level=info msg="TearDown network for sandbox \"e69f214e53e773a1d6c8e7835b031dc912fec5d85336fe6c05c0c6f5d6f68fb5\" successfully"
Mar 4 01:17:55.338578 containerd[1451]: time="2026-03-04T01:17:55.338544288Z" level=info msg="StopPodSandbox for \"e69f214e53e773a1d6c8e7835b031dc912fec5d85336fe6c05c0c6f5d6f68fb5\" returns successfully"
Mar 4 01:17:55.456917 kubelet[2503]: I0304 01:17:55.456717 2503 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-bpf-maps\" (UniqueName: \"kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-bpf-maps\") pod \"580c0bac-6b83-42cc-830e-0548fc4ffa5e\" (UID: \"580c0bac-6b83-42cc-830e-0548fc4ffa5e\") "
Mar 4 01:17:55.456917 kubelet[2503]: I0304 01:17:55.456860 2503 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-bpf-maps" pod "580c0bac-6b83-42cc-830e-0548fc4ffa5e" (UID: "580c0bac-6b83-42cc-830e-0548fc4ffa5e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 4 01:17:55.458062 kubelet[2503]: I0304 01:17:55.456961 2503 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/580c0bac-6b83-42cc-830e-0548fc4ffa5e-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/580c0bac-6b83-42cc-830e-0548fc4ffa5e-cilium-config-path\") pod \"580c0bac-6b83-42cc-830e-0548fc4ffa5e\" (UID: \"580c0bac-6b83-42cc-830e-0548fc4ffa5e\") "
Mar 4 01:17:55.458062 kubelet[2503]: I0304 01:17:55.457011 2503 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-xtables-lock\" (UniqueName: \"kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-xtables-lock\") pod \"580c0bac-6b83-42cc-830e-0548fc4ffa5e\" (UID: \"580c0bac-6b83-42cc-830e-0548fc4ffa5e\") "
Mar 4 01:17:55.458062 kubelet[2503]: I0304 01:17:55.457053 2503 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/580c0bac-6b83-42cc-830e-0548fc4ffa5e-clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/580c0bac-6b83-42cc-830e-0548fc4ffa5e-clustermesh-secrets\") pod \"580c0bac-6b83-42cc-830e-0548fc4ffa5e\" (UID: \"580c0bac-6b83-42cc-830e-0548fc4ffa5e\") "
Mar 4 01:17:55.458062 kubelet[2503]: I0304 01:17:55.457112 2503 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-cilium-run\" (UniqueName: \"kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-cilium-run\") pod \"580c0bac-6b83-42cc-830e-0548fc4ffa5e\" (UID: \"580c0bac-6b83-42cc-830e-0548fc4ffa5e\") "
Mar 4 01:17:55.458062 kubelet[2503]: I0304 01:17:55.457129 2503 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-cilium-cgroup\") pod \"580c0bac-6b83-42cc-830e-0548fc4ffa5e\" (UID: \"580c0bac-6b83-42cc-830e-0548fc4ffa5e\") "
Mar 4 01:17:55.458307 kubelet[2503]: I0304 01:17:55.457147 2503 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-etc-cni-netd\") pod \"580c0bac-6b83-42cc-830e-0548fc4ffa5e\" (UID: \"580c0bac-6b83-42cc-830e-0548fc4ffa5e\") "
Mar 4 01:17:55.458307 kubelet[2503]: I0304 01:17:55.457162 2503 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-lib-modules\" (UniqueName: \"kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-lib-modules\") pod \"580c0bac-6b83-42cc-830e-0548fc4ffa5e\" (UID: \"580c0bac-6b83-42cc-830e-0548fc4ffa5e\") "
Mar 4 01:17:55.458307 kubelet[2503]: I0304 01:17:55.457182 2503 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/1704a8ea-1822-4a66-9f90-ea5d365e8ad1-kube-api-access-g8bxv\" (UniqueName: \"kubernetes.io/projected/1704a8ea-1822-4a66-9f90-ea5d365e8ad1-kube-api-access-g8bxv\") pod \"1704a8ea-1822-4a66-9f90-ea5d365e8ad1\" (UID: \"1704a8ea-1822-4a66-9f90-ea5d365e8ad1\") "
Mar 4 01:17:55.458307 kubelet[2503]: I0304 01:17:55.457205 2503 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/580c0bac-6b83-42cc-830e-0548fc4ffa5e-hubble-tls\" (UniqueName: \"kubernetes.io/projected/580c0bac-6b83-42cc-830e-0548fc4ffa5e-hubble-tls\") pod \"580c0bac-6b83-42cc-830e-0548fc4ffa5e\" (UID: \"580c0bac-6b83-42cc-830e-0548fc4ffa5e\") "
Mar 4 01:17:55.458307 kubelet[2503]: I0304 01:17:55.457222 2503 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-host-proc-sys-kernel\") pod \"580c0bac-6b83-42cc-830e-0548fc4ffa5e\" (UID: \"580c0bac-6b83-42cc-830e-0548fc4ffa5e\") "
Mar 4 01:17:55.458554 kubelet[2503]: I0304 01:17:55.457236 2503 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-hostproc\" (UniqueName: \"kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-hostproc\") pod \"580c0bac-6b83-42cc-830e-0548fc4ffa5e\" (UID: \"580c0bac-6b83-42cc-830e-0548fc4ffa5e\") "
Mar 4 01:17:55.458554 kubelet[2503]: I0304 01:17:55.457258 2503 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/1704a8ea-1822-4a66-9f90-ea5d365e8ad1-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1704a8ea-1822-4a66-9f90-ea5d365e8ad1-cilium-config-path\") pod \"1704a8ea-1822-4a66-9f90-ea5d365e8ad1\" (UID: \"1704a8ea-1822-4a66-9f90-ea5d365e8ad1\") "
Mar 4 01:17:55.458554 kubelet[2503]: I0304 01:17:55.457274 2503 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-host-proc-sys-net\") pod \"580c0bac-6b83-42cc-830e-0548fc4ffa5e\" (UID: \"580c0bac-6b83-42cc-830e-0548fc4ffa5e\") "
Mar 4 01:17:55.458554 kubelet[2503]: I0304 01:17:55.457291 2503 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/580c0bac-6b83-42cc-830e-0548fc4ffa5e-kube-api-access-47jdk\" (UniqueName: \"kubernetes.io/projected/580c0bac-6b83-42cc-830e-0548fc4ffa5e-kube-api-access-47jdk\") pod \"580c0bac-6b83-42cc-830e-0548fc4ffa5e\" (UID: \"580c0bac-6b83-42cc-830e-0548fc4ffa5e\") "
Mar 4 01:17:55.458554 kubelet[2503]: I0304 01:17:55.457306 2503 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-cni-path\" (UniqueName: \"kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-cni-path\") pod \"580c0bac-6b83-42cc-830e-0548fc4ffa5e\" (UID: \"580c0bac-6b83-42cc-830e-0548fc4ffa5e\") "
Mar 4 01:17:55.458844 kubelet[2503]: I0304 01:17:55.457408 2503 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-bpf-maps\") on node \"localhost\" DevicePath \"\""
Mar 4 01:17:55.458844 kubelet[2503]: I0304 01:17:55.457438 2503 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-cni-path" pod "580c0bac-6b83-42cc-830e-0548fc4ffa5e" (UID: "580c0bac-6b83-42cc-830e-0548fc4ffa5e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 4 01:17:55.458844 kubelet[2503]: I0304 01:17:55.457448 2503 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-lib-modules" pod "580c0bac-6b83-42cc-830e-0548fc4ffa5e" (UID: "580c0bac-6b83-42cc-830e-0548fc4ffa5e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 4 01:17:55.458844 kubelet[2503]: I0304 01:17:55.458004 2503 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-xtables-lock" pod "580c0bac-6b83-42cc-830e-0548fc4ffa5e" (UID: "580c0bac-6b83-42cc-830e-0548fc4ffa5e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 4 01:17:55.458844 kubelet[2503]: I0304 01:17:55.458146 2503 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-cilium-cgroup" pod "580c0bac-6b83-42cc-830e-0548fc4ffa5e" (UID: "580c0bac-6b83-42cc-830e-0548fc4ffa5e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 4 01:17:55.458844 kubelet[2503]: I0304 01:17:55.458179 2503 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-cilium-run" pod "580c0bac-6b83-42cc-830e-0548fc4ffa5e" (UID: "580c0bac-6b83-42cc-830e-0548fc4ffa5e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 4 01:17:55.459099 kubelet[2503]: I0304 01:17:55.458205 2503 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-hostproc" pod "580c0bac-6b83-42cc-830e-0548fc4ffa5e" (UID: "580c0bac-6b83-42cc-830e-0548fc4ffa5e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 4 01:17:55.462033 kubelet[2503]: I0304 01:17:55.461973 2503 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/580c0bac-6b83-42cc-830e-0548fc4ffa5e-cilium-config-path" pod "580c0bac-6b83-42cc-830e-0548fc4ffa5e" (UID: "580c0bac-6b83-42cc-830e-0548fc4ffa5e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 4 01:17:55.462726 kubelet[2503]: I0304 01:17:55.462212 2503 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-host-proc-sys-kernel" pod "580c0bac-6b83-42cc-830e-0548fc4ffa5e" (UID: "580c0bac-6b83-42cc-830e-0548fc4ffa5e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 4 01:17:55.462726 kubelet[2503]: I0304 01:17:55.462243 2503 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-host-proc-sys-net" pod "580c0bac-6b83-42cc-830e-0548fc4ffa5e" (UID: "580c0bac-6b83-42cc-830e-0548fc4ffa5e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 4 01:17:55.462726 kubelet[2503]: I0304 01:17:55.462153 2503 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-etc-cni-netd" pod "580c0bac-6b83-42cc-830e-0548fc4ffa5e" (UID: "580c0bac-6b83-42cc-830e-0548fc4ffa5e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 4 01:17:55.466012 kubelet[2503]: I0304 01:17:55.465984 2503 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1704a8ea-1822-4a66-9f90-ea5d365e8ad1-cilium-config-path" pod "1704a8ea-1822-4a66-9f90-ea5d365e8ad1" (UID: "1704a8ea-1822-4a66-9f90-ea5d365e8ad1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 4 01:17:55.468508 kubelet[2503]: I0304 01:17:55.468412 2503 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1704a8ea-1822-4a66-9f90-ea5d365e8ad1-kube-api-access-g8bxv" pod "1704a8ea-1822-4a66-9f90-ea5d365e8ad1" (UID: "1704a8ea-1822-4a66-9f90-ea5d365e8ad1"). InnerVolumeSpecName "kube-api-access-g8bxv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 4 01:17:55.469248 kubelet[2503]: I0304 01:17:55.469181 2503 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/580c0bac-6b83-42cc-830e-0548fc4ffa5e-hubble-tls" pod "580c0bac-6b83-42cc-830e-0548fc4ffa5e" (UID: "580c0bac-6b83-42cc-830e-0548fc4ffa5e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 4 01:17:55.469634 kubelet[2503]: I0304 01:17:55.469545 2503 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/580c0bac-6b83-42cc-830e-0548fc4ffa5e-kube-api-access-47jdk" pod "580c0bac-6b83-42cc-830e-0548fc4ffa5e" (UID: "580c0bac-6b83-42cc-830e-0548fc4ffa5e"). InnerVolumeSpecName "kube-api-access-47jdk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 4 01:17:55.469710 kubelet[2503]: I0304 01:17:55.469553 2503 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/580c0bac-6b83-42cc-830e-0548fc4ffa5e-clustermesh-secrets" pod "580c0bac-6b83-42cc-830e-0548fc4ffa5e" (UID: "580c0bac-6b83-42cc-830e-0548fc4ffa5e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 4 01:17:55.558510 kubelet[2503]: I0304 01:17:55.558128 2503 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/580c0bac-6b83-42cc-830e-0548fc4ffa5e-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 4 01:17:55.558510 kubelet[2503]: I0304 01:17:55.558206 2503 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-xtables-lock\") on node \"localhost\" DevicePath \"\""
Mar 4 01:17:55.558510 kubelet[2503]: I0304 01:17:55.558217 2503 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/580c0bac-6b83-42cc-830e-0548fc4ffa5e-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Mar 4 01:17:55.558510 kubelet[2503]: I0304 01:17:55.558226 2503 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-cilium-run\") on node \"localhost\" DevicePath \"\""
Mar 4 01:17:55.558510 kubelet[2503]: I0304 01:17:55.558235 2503 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Mar 4 01:17:55.558510 kubelet[2503]: I0304 01:17:55.558243 2503 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Mar 4 01:17:55.558510 kubelet[2503]: I0304 01:17:55.558251 2503 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-lib-modules\") on node \"localhost\" DevicePath \"\""
Mar 4 01:17:55.558510 kubelet[2503]: I0304 01:17:55.558260 2503 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g8bxv\" (UniqueName: \"kubernetes.io/projected/1704a8ea-1822-4a66-9f90-ea5d365e8ad1-kube-api-access-g8bxv\") on node \"localhost\" DevicePath \"\""
Mar 4 01:17:55.559124 kubelet[2503]: I0304 01:17:55.558268 2503 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/580c0bac-6b83-42cc-830e-0548fc4ffa5e-hubble-tls\") on node \"localhost\" DevicePath \"\""
Mar 4 01:17:55.559124 kubelet[2503]: I0304 01:17:55.558277 2503 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Mar 4 01:17:55.559124 kubelet[2503]: I0304 01:17:55.558286 2503 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-hostproc\") on node \"localhost\" DevicePath \"\""
Mar 4 01:17:55.559124 kubelet[2503]: I0304 01:17:55.558295 2503 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1704a8ea-1822-4a66-9f90-ea5d365e8ad1-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 4 01:17:55.559124 kubelet[2503]: I0304 01:17:55.558303 2503 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Mar 4 01:17:55.559124 kubelet[2503]: I0304 01:17:55.558310 2503 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-47jdk\" (UniqueName: \"kubernetes.io/projected/580c0bac-6b83-42cc-830e-0548fc4ffa5e-kube-api-access-47jdk\") on node \"localhost\" DevicePath \"\""
Mar 4 01:17:55.559124 kubelet[2503]: I0304 01:17:55.558362 2503 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/580c0bac-6b83-42cc-830e-0548fc4ffa5e-cni-path\") on node \"localhost\" DevicePath \"\""
Mar 4 01:17:55.762607 kubelet[2503]: I0304 01:17:55.762569 2503 scope.go:122] "RemoveContainer" containerID="7bf2dbd192e1a095a6cc557eb4d9ce57add8c0220eb80730ddf5a557f3f2ace8"
Mar 4 01:17:55.765703 containerd[1451]: time="2026-03-04T01:17:55.765562422Z" level=info msg="RemoveContainer for \"7bf2dbd192e1a095a6cc557eb4d9ce57add8c0220eb80730ddf5a557f3f2ace8\""
Mar 4 01:17:55.775638 systemd[1]: Removed slice kubepods-burstable-pod580c0bac_6b83_42cc_830e_0548fc4ffa5e.slice - libcontainer container kubepods-burstable-pod580c0bac_6b83_42cc_830e_0548fc4ffa5e.slice.
Mar 4 01:17:55.776386 systemd[1]: kubepods-burstable-pod580c0bac_6b83_42cc_830e_0548fc4ffa5e.slice: Consumed 9.237s CPU time.
Mar 4 01:17:55.777293 containerd[1451]: time="2026-03-04T01:17:55.777222114Z" level=info msg="RemoveContainer for \"7bf2dbd192e1a095a6cc557eb4d9ce57add8c0220eb80730ddf5a557f3f2ace8\" returns successfully"
Mar 4 01:17:55.778051 kubelet[2503]: I0304 01:17:55.777928 2503 scope.go:122] "RemoveContainer" containerID="6fcc25a695474c05f7e6b7fb5e1ca5ec4803c3bcbe2591745ff8898d63eac176"
Mar 4 01:17:55.779195 systemd[1]: Removed slice kubepods-besteffort-pod1704a8ea_1822_4a66_9f90_ea5d365e8ad1.slice - libcontainer container kubepods-besteffort-pod1704a8ea_1822_4a66_9f90_ea5d365e8ad1.slice.
Mar 4 01:17:55.780398 containerd[1451]: time="2026-03-04T01:17:55.780290521Z" level=info msg="RemoveContainer for \"6fcc25a695474c05f7e6b7fb5e1ca5ec4803c3bcbe2591745ff8898d63eac176\""
Mar 4 01:17:55.787201 containerd[1451]: time="2026-03-04T01:17:55.786969289Z" level=info msg="RemoveContainer for \"6fcc25a695474c05f7e6b7fb5e1ca5ec4803c3bcbe2591745ff8898d63eac176\" returns successfully"
Mar 4 01:17:55.787572 kubelet[2503]: I0304 01:17:55.787463 2503 scope.go:122] "RemoveContainer" containerID="8bd19e3ee0d183b330de8c1cfce66f46f52d7e8417021f551bb08e89f563b452"
Mar 4 01:17:55.789938 containerd[1451]: time="2026-03-04T01:17:55.789584720Z" level=info msg="RemoveContainer for \"8bd19e3ee0d183b330de8c1cfce66f46f52d7e8417021f551bb08e89f563b452\""
Mar 4 01:17:55.797572 containerd[1451]: time="2026-03-04T01:17:55.797252169Z" level=info msg="RemoveContainer for \"8bd19e3ee0d183b330de8c1cfce66f46f52d7e8417021f551bb08e89f563b452\" returns successfully"
Mar 4 01:17:55.800120 kubelet[2503]: I0304 01:17:55.797726 2503 scope.go:122] "RemoveContainer" containerID="5a6f31578f58bb5ec9ed2b042b2a881a79609d429cb7d163c3ee4ca1557c7897"
Mar 4 01:17:55.801740 containerd[1451]: time="2026-03-04T01:17:55.801425864Z" level=info msg="RemoveContainer for \"5a6f31578f58bb5ec9ed2b042b2a881a79609d429cb7d163c3ee4ca1557c7897\""
Mar 4 01:17:55.807272 containerd[1451]: time="2026-03-04T01:17:55.807201287Z" level=info msg="RemoveContainer for \"5a6f31578f58bb5ec9ed2b042b2a881a79609d429cb7d163c3ee4ca1557c7897\" returns successfully"
Mar 4 01:17:55.808124 kubelet[2503]: I0304 01:17:55.807866 2503 scope.go:122] "RemoveContainer" containerID="a6021d4843f7ea6709dc6dc0a9e8ced18746e1a8a39b6170ebe6ebdc47ccbdc3"
Mar 4 01:17:55.810070 containerd[1451]: time="2026-03-04T01:17:55.809909684Z" level=info msg="RemoveContainer for \"a6021d4843f7ea6709dc6dc0a9e8ced18746e1a8a39b6170ebe6ebdc47ccbdc3\""
Mar 4 01:17:55.818413 containerd[1451]: time="2026-03-04T01:17:55.818282895Z" level=info msg="RemoveContainer for \"a6021d4843f7ea6709dc6dc0a9e8ced18746e1a8a39b6170ebe6ebdc47ccbdc3\" returns successfully"
Mar 4 01:17:55.818690 kubelet[2503]: I0304 01:17:55.818659 2503 scope.go:122] "RemoveContainer" containerID="7bf2dbd192e1a095a6cc557eb4d9ce57add8c0220eb80730ddf5a557f3f2ace8"
Mar 4 01:17:55.827886 containerd[1451]: time="2026-03-04T01:17:55.827619835Z" level=error msg="ContainerStatus for \"7bf2dbd192e1a095a6cc557eb4d9ce57add8c0220eb80730ddf5a557f3f2ace8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7bf2dbd192e1a095a6cc557eb4d9ce57add8c0220eb80730ddf5a557f3f2ace8\": not found"
Mar 4 01:17:55.842804 kubelet[2503]: E0304 01:17:55.842666 2503 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7bf2dbd192e1a095a6cc557eb4d9ce57add8c0220eb80730ddf5a557f3f2ace8\": not found" containerID="7bf2dbd192e1a095a6cc557eb4d9ce57add8c0220eb80730ddf5a557f3f2ace8"
Mar 4 01:17:55.842973 kubelet[2503]: I0304 01:17:55.842834 2503 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7bf2dbd192e1a095a6cc557eb4d9ce57add8c0220eb80730ddf5a557f3f2ace8"} err="failed to get container status \"7bf2dbd192e1a095a6cc557eb4d9ce57add8c0220eb80730ddf5a557f3f2ace8\": rpc error: code = NotFound desc = an error occurred when try to find container \"7bf2dbd192e1a095a6cc557eb4d9ce57add8c0220eb80730ddf5a557f3f2ace8\": not found"
Mar 4 01:17:55.842973 kubelet[2503]: I0304 01:17:55.842885 2503 scope.go:122] "RemoveContainer" containerID="6fcc25a695474c05f7e6b7fb5e1ca5ec4803c3bcbe2591745ff8898d63eac176"
Mar 4 01:17:55.843506 containerd[1451]: time="2026-03-04T01:17:55.843374383Z" level=error msg="ContainerStatus for \"6fcc25a695474c05f7e6b7fb5e1ca5ec4803c3bcbe2591745ff8898d63eac176\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6fcc25a695474c05f7e6b7fb5e1ca5ec4803c3bcbe2591745ff8898d63eac176\": not found"
Mar 4 01:17:55.843701 kubelet[2503]: E0304 01:17:55.843587 2503 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6fcc25a695474c05f7e6b7fb5e1ca5ec4803c3bcbe2591745ff8898d63eac176\": not found" containerID="6fcc25a695474c05f7e6b7fb5e1ca5ec4803c3bcbe2591745ff8898d63eac176"
Mar 4 01:17:55.843701 kubelet[2503]: I0304 01:17:55.843617 2503 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6fcc25a695474c05f7e6b7fb5e1ca5ec4803c3bcbe2591745ff8898d63eac176"} err="failed to get container status \"6fcc25a695474c05f7e6b7fb5e1ca5ec4803c3bcbe2591745ff8898d63eac176\": rpc error: code = NotFound desc = an error occurred when try to find container \"6fcc25a695474c05f7e6b7fb5e1ca5ec4803c3bcbe2591745ff8898d63eac176\": not found"
Mar 4 01:17:55.843701 kubelet[2503]: I0304 01:17:55.843635 2503 scope.go:122] "RemoveContainer" containerID="8bd19e3ee0d183b330de8c1cfce66f46f52d7e8417021f551bb08e89f563b452"
Mar 4 01:17:55.844072 containerd[1451]: time="2026-03-04T01:17:55.843966325Z" level=error msg="ContainerStatus for \"8bd19e3ee0d183b330de8c1cfce66f46f52d7e8417021f551bb08e89f563b452\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8bd19e3ee0d183b330de8c1cfce66f46f52d7e8417021f551bb08e89f563b452\": not found"
Mar 4 01:17:55.844400 kubelet[2503]: E0304 01:17:55.844227 2503 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8bd19e3ee0d183b330de8c1cfce66f46f52d7e8417021f551bb08e89f563b452\": not found" containerID="8bd19e3ee0d183b330de8c1cfce66f46f52d7e8417021f551bb08e89f563b452"
Mar 4 01:17:55.844400 kubelet[2503]: I0304 01:17:55.844287 2503 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8bd19e3ee0d183b330de8c1cfce66f46f52d7e8417021f551bb08e89f563b452"} err="failed to get container status \"8bd19e3ee0d183b330de8c1cfce66f46f52d7e8417021f551bb08e89f563b452\": rpc error: code = NotFound desc = an error occurred when try to find container \"8bd19e3ee0d183b330de8c1cfce66f46f52d7e8417021f551bb08e89f563b452\": not found"
Mar 4 01:17:55.844400 kubelet[2503]: I0304 01:17:55.844303 2503 scope.go:122] "RemoveContainer" containerID="5a6f31578f58bb5ec9ed2b042b2a881a79609d429cb7d163c3ee4ca1557c7897"
Mar 4 01:17:55.844728 containerd[1451]: time="2026-03-04T01:17:55.844605045Z" level=error msg="ContainerStatus for \"5a6f31578f58bb5ec9ed2b042b2a881a79609d429cb7d163c3ee4ca1557c7897\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5a6f31578f58bb5ec9ed2b042b2a881a79609d429cb7d163c3ee4ca1557c7897\": not found"
Mar 4 01:17:55.844902 kubelet[2503]: E0304 01:17:55.844844 2503 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5a6f31578f58bb5ec9ed2b042b2a881a79609d429cb7d163c3ee4ca1557c7897\": not found" containerID="5a6f31578f58bb5ec9ed2b042b2a881a79609d429cb7d163c3ee4ca1557c7897"
Mar 4 01:17:55.844902 kubelet[2503]: I0304 01:17:55.844866 2503 pod_container_deletor.go:53] "DeleteContainer returned error"
containerID={"Type":"containerd","ID":"5a6f31578f58bb5ec9ed2b042b2a881a79609d429cb7d163c3ee4ca1557c7897"} err="failed to get container status \"5a6f31578f58bb5ec9ed2b042b2a881a79609d429cb7d163c3ee4ca1557c7897\": rpc error: code = NotFound desc = an error occurred when try to find container \"5a6f31578f58bb5ec9ed2b042b2a881a79609d429cb7d163c3ee4ca1557c7897\": not found" Mar 4 01:17:55.844902 kubelet[2503]: I0304 01:17:55.844880 2503 scope.go:122] "RemoveContainer" containerID="a6021d4843f7ea6709dc6dc0a9e8ced18746e1a8a39b6170ebe6ebdc47ccbdc3" Mar 4 01:17:55.845053 containerd[1451]: time="2026-03-04T01:17:55.845024876Z" level=error msg="ContainerStatus for \"a6021d4843f7ea6709dc6dc0a9e8ced18746e1a8a39b6170ebe6ebdc47ccbdc3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a6021d4843f7ea6709dc6dc0a9e8ced18746e1a8a39b6170ebe6ebdc47ccbdc3\": not found" Mar 4 01:17:55.845227 kubelet[2503]: E0304 01:17:55.845126 2503 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a6021d4843f7ea6709dc6dc0a9e8ced18746e1a8a39b6170ebe6ebdc47ccbdc3\": not found" containerID="a6021d4843f7ea6709dc6dc0a9e8ced18746e1a8a39b6170ebe6ebdc47ccbdc3" Mar 4 01:17:55.845227 kubelet[2503]: I0304 01:17:55.845147 2503 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a6021d4843f7ea6709dc6dc0a9e8ced18746e1a8a39b6170ebe6ebdc47ccbdc3"} err="failed to get container status \"a6021d4843f7ea6709dc6dc0a9e8ced18746e1a8a39b6170ebe6ebdc47ccbdc3\": rpc error: code = NotFound desc = an error occurred when try to find container \"a6021d4843f7ea6709dc6dc0a9e8ced18746e1a8a39b6170ebe6ebdc47ccbdc3\": not found" Mar 4 01:17:55.845227 kubelet[2503]: I0304 01:17:55.845162 2503 scope.go:122] "RemoveContainer" containerID="5e95cd21adda8cd3e09f9b2d48fed15fb816ac6fff147efe7e0ab2d5a2d7a91a" Mar 4 01:17:55.846999 containerd[1451]: 
time="2026-03-04T01:17:55.846899354Z" level=info msg="RemoveContainer for \"5e95cd21adda8cd3e09f9b2d48fed15fb816ac6fff147efe7e0ab2d5a2d7a91a\"" Mar 4 01:17:55.852653 containerd[1451]: time="2026-03-04T01:17:55.852436866Z" level=info msg="RemoveContainer for \"5e95cd21adda8cd3e09f9b2d48fed15fb816ac6fff147efe7e0ab2d5a2d7a91a\" returns successfully" Mar 4 01:17:55.853628 kubelet[2503]: I0304 01:17:55.853464 2503 scope.go:122] "RemoveContainer" containerID="5e95cd21adda8cd3e09f9b2d48fed15fb816ac6fff147efe7e0ab2d5a2d7a91a" Mar 4 01:17:55.854110 containerd[1451]: time="2026-03-04T01:17:55.853990800Z" level=error msg="ContainerStatus for \"5e95cd21adda8cd3e09f9b2d48fed15fb816ac6fff147efe7e0ab2d5a2d7a91a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5e95cd21adda8cd3e09f9b2d48fed15fb816ac6fff147efe7e0ab2d5a2d7a91a\": not found" Mar 4 01:17:55.854447 kubelet[2503]: E0304 01:17:55.854255 2503 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5e95cd21adda8cd3e09f9b2d48fed15fb816ac6fff147efe7e0ab2d5a2d7a91a\": not found" containerID="5e95cd21adda8cd3e09f9b2d48fed15fb816ac6fff147efe7e0ab2d5a2d7a91a" Mar 4 01:17:55.854447 kubelet[2503]: I0304 01:17:55.854382 2503 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5e95cd21adda8cd3e09f9b2d48fed15fb816ac6fff147efe7e0ab2d5a2d7a91a"} err="failed to get container status \"5e95cd21adda8cd3e09f9b2d48fed15fb816ac6fff147efe7e0ab2d5a2d7a91a\": rpc error: code = NotFound desc = an error occurred when try to find container \"5e95cd21adda8cd3e09f9b2d48fed15fb816ac6fff147efe7e0ab2d5a2d7a91a\": not found" Mar 4 01:17:55.999657 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1aa9d2a6057f7cccd7847164da2362faf1fd31a3ef7d8702952871778790b1bb-rootfs.mount: Deactivated successfully. 
Mar 4 01:17:55.999900 systemd[1]: var-lib-kubelet-pods-1704a8ea\x2d1822\x2d4a66\x2d9f90\x2dea5d365e8ad1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg8bxv.mount: Deactivated successfully. Mar 4 01:17:56.000018 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e69f214e53e773a1d6c8e7835b031dc912fec5d85336fe6c05c0c6f5d6f68fb5-rootfs.mount: Deactivated successfully. Mar 4 01:17:56.000117 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e69f214e53e773a1d6c8e7835b031dc912fec5d85336fe6c05c0c6f5d6f68fb5-shm.mount: Deactivated successfully. Mar 4 01:17:56.000198 systemd[1]: var-lib-kubelet-pods-580c0bac\x2d6b83\x2d42cc\x2d830e\x2d0548fc4ffa5e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 4 01:17:56.000300 systemd[1]: var-lib-kubelet-pods-580c0bac\x2d6b83\x2d42cc\x2d830e\x2d0548fc4ffa5e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d47jdk.mount: Deactivated successfully. Mar 4 01:17:56.000420 systemd[1]: var-lib-kubelet-pods-580c0bac\x2d6b83\x2d42cc\x2d830e\x2d0548fc4ffa5e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 4 01:17:56.805586 sshd[4326]: pam_unix(sshd:session): session closed for user core Mar 4 01:17:56.817112 systemd[1]: sshd@33-10.0.0.125:22-10.0.0.1:56626.service: Deactivated successfully. Mar 4 01:17:56.819628 systemd[1]: session-34.scope: Deactivated successfully. Mar 4 01:17:56.821809 systemd-logind[1443]: Session 34 logged out. Waiting for processes to exit. Mar 4 01:17:56.830240 systemd[1]: Started sshd@34-10.0.0.125:22-10.0.0.1:56640.service - OpenSSH per-connection server daemon (10.0.0.1:56640). Mar 4 01:17:56.831540 systemd-logind[1443]: Removed session 34. 
Mar 4 01:17:56.880623 sshd[4485]: Accepted publickey for core from 10.0.0.1 port 56640 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:17:56.881915 sshd[4485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:17:56.890229 systemd-logind[1443]: New session 35 of user core. Mar 4 01:17:56.905242 systemd[1]: Started session-35.scope - Session 35 of User core. Mar 4 01:17:56.964770 kubelet[2503]: I0304 01:17:56.964659 2503 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1704a8ea-1822-4a66-9f90-ea5d365e8ad1" path="/var/lib/kubelet/pods/1704a8ea-1822-4a66-9f90-ea5d365e8ad1/volumes" Mar 4 01:17:56.965889 kubelet[2503]: I0304 01:17:56.965680 2503 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="580c0bac-6b83-42cc-830e-0548fc4ffa5e" path="/var/lib/kubelet/pods/580c0bac-6b83-42cc-830e-0548fc4ffa5e/volumes" Mar 4 01:17:57.599068 sshd[4485]: pam_unix(sshd:session): session closed for user core Mar 4 01:17:57.617521 systemd[1]: sshd@34-10.0.0.125:22-10.0.0.1:56640.service: Deactivated successfully. Mar 4 01:17:57.620867 systemd[1]: session-35.scope: Deactivated successfully. Mar 4 01:17:57.624589 systemd-logind[1443]: Session 35 logged out. Waiting for processes to exit. Mar 4 01:17:57.631227 systemd[1]: Started sshd@35-10.0.0.125:22-10.0.0.1:56650.service - OpenSSH per-connection server daemon (10.0.0.1:56650). Mar 4 01:17:57.634906 systemd-logind[1443]: Removed session 35. Mar 4 01:17:57.655848 systemd[1]: Created slice kubepods-burstable-podb747825f_81a8_4b7c_a498_c471ea76c627.slice - libcontainer container kubepods-burstable-podb747825f_81a8_4b7c_a498_c471ea76c627.slice. 
Mar 4 01:17:57.674647 kubelet[2503]: I0304 01:17:57.674537 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b747825f-81a8-4b7c-a498-c471ea76c627-bpf-maps\") pod \"cilium-slllv\" (UID: \"b747825f-81a8-4b7c-a498-c471ea76c627\") " pod="kube-system/cilium-slllv" Mar 4 01:17:57.675156 kubelet[2503]: I0304 01:17:57.675137 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b747825f-81a8-4b7c-a498-c471ea76c627-xtables-lock\") pod \"cilium-slllv\" (UID: \"b747825f-81a8-4b7c-a498-c471ea76c627\") " pod="kube-system/cilium-slllv" Mar 4 01:17:57.675485 kubelet[2503]: I0304 01:17:57.675298 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b747825f-81a8-4b7c-a498-c471ea76c627-cilium-run\") pod \"cilium-slllv\" (UID: \"b747825f-81a8-4b7c-a498-c471ea76c627\") " pod="kube-system/cilium-slllv" Mar 4 01:17:57.675485 kubelet[2503]: I0304 01:17:57.675369 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b747825f-81a8-4b7c-a498-c471ea76c627-cni-path\") pod \"cilium-slllv\" (UID: \"b747825f-81a8-4b7c-a498-c471ea76c627\") " pod="kube-system/cilium-slllv" Mar 4 01:17:57.675485 kubelet[2503]: I0304 01:17:57.675393 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b747825f-81a8-4b7c-a498-c471ea76c627-cilium-config-path\") pod \"cilium-slllv\" (UID: \"b747825f-81a8-4b7c-a498-c471ea76c627\") " pod="kube-system/cilium-slllv" Mar 4 01:17:57.675485 kubelet[2503]: I0304 01:17:57.675417 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b747825f-81a8-4b7c-a498-c471ea76c627-cilium-cgroup\") pod \"cilium-slllv\" (UID: \"b747825f-81a8-4b7c-a498-c471ea76c627\") " pod="kube-system/cilium-slllv" Mar 4 01:17:57.675485 kubelet[2503]: I0304 01:17:57.675444 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b747825f-81a8-4b7c-a498-c471ea76c627-host-proc-sys-net\") pod \"cilium-slllv\" (UID: \"b747825f-81a8-4b7c-a498-c471ea76c627\") " pod="kube-system/cilium-slllv" Mar 4 01:17:57.675664 kubelet[2503]: I0304 01:17:57.675568 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b747825f-81a8-4b7c-a498-c471ea76c627-host-proc-sys-kernel\") pod \"cilium-slllv\" (UID: \"b747825f-81a8-4b7c-a498-c471ea76c627\") " pod="kube-system/cilium-slllv" Mar 4 01:17:57.675700 kubelet[2503]: I0304 01:17:57.675679 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b747825f-81a8-4b7c-a498-c471ea76c627-etc-cni-netd\") pod \"cilium-slllv\" (UID: \"b747825f-81a8-4b7c-a498-c471ea76c627\") " pod="kube-system/cilium-slllv" Mar 4 01:17:57.675738 kubelet[2503]: I0304 01:17:57.675706 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b747825f-81a8-4b7c-a498-c471ea76c627-cilium-ipsec-secrets\") pod \"cilium-slllv\" (UID: \"b747825f-81a8-4b7c-a498-c471ea76c627\") " pod="kube-system/cilium-slllv" Mar 4 01:17:57.675738 kubelet[2503]: I0304 01:17:57.675719 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvh8k\" (UniqueName: 
\"kubernetes.io/projected/b747825f-81a8-4b7c-a498-c471ea76c627-kube-api-access-gvh8k\") pod \"cilium-slllv\" (UID: \"b747825f-81a8-4b7c-a498-c471ea76c627\") " pod="kube-system/cilium-slllv" Mar 4 01:17:57.675738 kubelet[2503]: I0304 01:17:57.675734 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b747825f-81a8-4b7c-a498-c471ea76c627-lib-modules\") pod \"cilium-slllv\" (UID: \"b747825f-81a8-4b7c-a498-c471ea76c627\") " pod="kube-system/cilium-slllv" Mar 4 01:17:57.675738 kubelet[2503]: I0304 01:17:57.675790 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b747825f-81a8-4b7c-a498-c471ea76c627-hostproc\") pod \"cilium-slllv\" (UID: \"b747825f-81a8-4b7c-a498-c471ea76c627\") " pod="kube-system/cilium-slllv" Mar 4 01:17:57.675738 kubelet[2503]: I0304 01:17:57.675804 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b747825f-81a8-4b7c-a498-c471ea76c627-clustermesh-secrets\") pod \"cilium-slllv\" (UID: \"b747825f-81a8-4b7c-a498-c471ea76c627\") " pod="kube-system/cilium-slllv" Mar 4 01:17:57.675991 kubelet[2503]: I0304 01:17:57.675816 2503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b747825f-81a8-4b7c-a498-c471ea76c627-hubble-tls\") pod \"cilium-slllv\" (UID: \"b747825f-81a8-4b7c-a498-c471ea76c627\") " pod="kube-system/cilium-slllv" Mar 4 01:17:57.685109 sshd[4498]: Accepted publickey for core from 10.0.0.1 port 56650 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:17:57.691580 sshd[4498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:17:57.701180 systemd-logind[1443]: New session 36 of user core. 
Mar 4 01:17:57.706996 systemd[1]: Started session-36.scope - Session 36 of User core. Mar 4 01:17:57.786660 sshd[4498]: pam_unix(sshd:session): session closed for user core Mar 4 01:17:57.813383 systemd[1]: sshd@35-10.0.0.125:22-10.0.0.1:56650.service: Deactivated successfully. Mar 4 01:17:57.815413 systemd[1]: session-36.scope: Deactivated successfully. Mar 4 01:17:57.816487 systemd-logind[1443]: Session 36 logged out. Waiting for processes to exit. Mar 4 01:17:57.836014 systemd[1]: Started sshd@36-10.0.0.125:22-10.0.0.1:56660.service - OpenSSH per-connection server daemon (10.0.0.1:56660). Mar 4 01:17:57.837907 systemd-logind[1443]: Removed session 36. Mar 4 01:17:57.886541 sshd[4510]: Accepted publickey for core from 10.0.0.1 port 56660 ssh2: RSA SHA256:KmpVXbxBd+OoeNcwbOBzU4oyxeALg+2FJUMSR0XUp7U Mar 4 01:17:57.892678 sshd[4510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 01:17:57.903800 systemd-logind[1443]: New session 37 of user core. Mar 4 01:17:57.917182 systemd[1]: Started session-37.scope - Session 37 of User core. Mar 4 01:17:57.969561 kubelet[2503]: E0304 01:17:57.969429 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:17:57.971649 containerd[1451]: time="2026-03-04T01:17:57.971534737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-slllv,Uid:b747825f-81a8-4b7c-a498-c471ea76c627,Namespace:kube-system,Attempt:0,}" Mar 4 01:17:58.018153 containerd[1451]: time="2026-03-04T01:17:58.017573076Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 01:17:58.018153 containerd[1451]: time="2026-03-04T01:17:58.017668614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 01:17:58.018153 containerd[1451]: time="2026-03-04T01:17:58.017687399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:17:58.018153 containerd[1451]: time="2026-03-04T01:17:58.017903852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 01:17:58.073176 systemd[1]: Started cri-containerd-c38e26972444e0af0b8996bc17c8e2a3643032d51d84a066cb89983f9cb0b48b.scope - libcontainer container c38e26972444e0af0b8996bc17c8e2a3643032d51d84a066cb89983f9cb0b48b. Mar 4 01:17:58.158587 containerd[1451]: time="2026-03-04T01:17:58.157941673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-slllv,Uid:b747825f-81a8-4b7c-a498-c471ea76c627,Namespace:kube-system,Attempt:0,} returns sandbox id \"c38e26972444e0af0b8996bc17c8e2a3643032d51d84a066cb89983f9cb0b48b\"" Mar 4 01:17:58.159897 kubelet[2503]: E0304 01:17:58.159829 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:17:58.174711 containerd[1451]: time="2026-03-04T01:17:58.174573700Z" level=info msg="CreateContainer within sandbox \"c38e26972444e0af0b8996bc17c8e2a3643032d51d84a066cb89983f9cb0b48b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 4 01:17:58.226100 containerd[1451]: time="2026-03-04T01:17:58.225853431Z" level=info msg="CreateContainer within sandbox \"c38e26972444e0af0b8996bc17c8e2a3643032d51d84a066cb89983f9cb0b48b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3ffc1514343abf599201b601167e06ee8ad0db7e2ce98185f4e999264476757d\"" Mar 4 01:17:58.228347 containerd[1451]: time="2026-03-04T01:17:58.228200381Z" level=info msg="StartContainer for 
\"3ffc1514343abf599201b601167e06ee8ad0db7e2ce98185f4e999264476757d\"" Mar 4 01:17:58.324949 systemd[1]: Started cri-containerd-3ffc1514343abf599201b601167e06ee8ad0db7e2ce98185f4e999264476757d.scope - libcontainer container 3ffc1514343abf599201b601167e06ee8ad0db7e2ce98185f4e999264476757d. Mar 4 01:17:58.407861 containerd[1451]: time="2026-03-04T01:17:58.407434294Z" level=info msg="StartContainer for \"3ffc1514343abf599201b601167e06ee8ad0db7e2ce98185f4e999264476757d\" returns successfully" Mar 4 01:17:58.420245 systemd[1]: cri-containerd-3ffc1514343abf599201b601167e06ee8ad0db7e2ce98185f4e999264476757d.scope: Deactivated successfully. Mar 4 01:17:58.472209 containerd[1451]: time="2026-03-04T01:17:58.472021726Z" level=info msg="shim disconnected" id=3ffc1514343abf599201b601167e06ee8ad0db7e2ce98185f4e999264476757d namespace=k8s.io Mar 4 01:17:58.472209 containerd[1451]: time="2026-03-04T01:17:58.472110832Z" level=warning msg="cleaning up after shim disconnected" id=3ffc1514343abf599201b601167e06ee8ad0db7e2ce98185f4e999264476757d namespace=k8s.io Mar 4 01:17:58.472209 containerd[1451]: time="2026-03-04T01:17:58.472122964Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 4 01:17:58.788253 kubelet[2503]: E0304 01:17:58.787300 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:17:58.799230 containerd[1451]: time="2026-03-04T01:17:58.799062487Z" level=info msg="CreateContainer within sandbox \"c38e26972444e0af0b8996bc17c8e2a3643032d51d84a066cb89983f9cb0b48b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 4 01:17:58.836494 containerd[1451]: time="2026-03-04T01:17:58.836184666Z" level=info msg="CreateContainer within sandbox \"c38e26972444e0af0b8996bc17c8e2a3643032d51d84a066cb89983f9cb0b48b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id 
\"854133e1ce8909b24afe28930d30cd9430398bb8032516266ae83e96f47fdad4\"" Mar 4 01:17:58.841643 containerd[1451]: time="2026-03-04T01:17:58.841520620Z" level=info msg="StartContainer for \"854133e1ce8909b24afe28930d30cd9430398bb8032516266ae83e96f47fdad4\"" Mar 4 01:17:58.916948 systemd[1]: Started cri-containerd-854133e1ce8909b24afe28930d30cd9430398bb8032516266ae83e96f47fdad4.scope - libcontainer container 854133e1ce8909b24afe28930d30cd9430398bb8032516266ae83e96f47fdad4. Mar 4 01:17:58.998969 containerd[1451]: time="2026-03-04T01:17:58.998735545Z" level=info msg="StartContainer for \"854133e1ce8909b24afe28930d30cd9430398bb8032516266ae83e96f47fdad4\" returns successfully" Mar 4 01:17:59.014722 systemd[1]: cri-containerd-854133e1ce8909b24afe28930d30cd9430398bb8032516266ae83e96f47fdad4.scope: Deactivated successfully. Mar 4 01:17:59.074854 containerd[1451]: time="2026-03-04T01:17:59.073872069Z" level=info msg="shim disconnected" id=854133e1ce8909b24afe28930d30cd9430398bb8032516266ae83e96f47fdad4 namespace=k8s.io Mar 4 01:17:59.074854 containerd[1451]: time="2026-03-04T01:17:59.074068835Z" level=warning msg="cleaning up after shim disconnected" id=854133e1ce8909b24afe28930d30cd9430398bb8032516266ae83e96f47fdad4 namespace=k8s.io Mar 4 01:17:59.074854 containerd[1451]: time="2026-03-04T01:17:59.074078993Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 4 01:17:59.094585 kubelet[2503]: E0304 01:17:59.094433 2503 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 4 01:17:59.793867 kubelet[2503]: E0304 01:17:59.793144 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:17:59.807303 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-854133e1ce8909b24afe28930d30cd9430398bb8032516266ae83e96f47fdad4-rootfs.mount: Deactivated successfully. Mar 4 01:17:59.808137 containerd[1451]: time="2026-03-04T01:17:59.808051461Z" level=info msg="CreateContainer within sandbox \"c38e26972444e0af0b8996bc17c8e2a3643032d51d84a066cb89983f9cb0b48b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 4 01:17:59.842423 containerd[1451]: time="2026-03-04T01:17:59.842313968Z" level=info msg="CreateContainer within sandbox \"c38e26972444e0af0b8996bc17c8e2a3643032d51d84a066cb89983f9cb0b48b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ee3effe6ce7079a82423d473631ee97ca3511daa1c34b06aa4b3c6c0b1d831b1\"" Mar 4 01:17:59.843275 containerd[1451]: time="2026-03-04T01:17:59.843256021Z" level=info msg="StartContainer for \"ee3effe6ce7079a82423d473631ee97ca3511daa1c34b06aa4b3c6c0b1d831b1\"" Mar 4 01:17:59.899384 systemd[1]: Started cri-containerd-ee3effe6ce7079a82423d473631ee97ca3511daa1c34b06aa4b3c6c0b1d831b1.scope - libcontainer container ee3effe6ce7079a82423d473631ee97ca3511daa1c34b06aa4b3c6c0b1d831b1. Mar 4 01:17:59.935721 containerd[1451]: time="2026-03-04T01:17:59.935607587Z" level=info msg="StartContainer for \"ee3effe6ce7079a82423d473631ee97ca3511daa1c34b06aa4b3c6c0b1d831b1\" returns successfully" Mar 4 01:17:59.939905 systemd[1]: cri-containerd-ee3effe6ce7079a82423d473631ee97ca3511daa1c34b06aa4b3c6c0b1d831b1.scope: Deactivated successfully. 
Mar 4 01:17:59.976791 containerd[1451]: time="2026-03-04T01:17:59.976560993Z" level=info msg="shim disconnected" id=ee3effe6ce7079a82423d473631ee97ca3511daa1c34b06aa4b3c6c0b1d831b1 namespace=k8s.io Mar 4 01:17:59.976791 containerd[1451]: time="2026-03-04T01:17:59.976621796Z" level=warning msg="cleaning up after shim disconnected" id=ee3effe6ce7079a82423d473631ee97ca3511daa1c34b06aa4b3c6c0b1d831b1 namespace=k8s.io Mar 4 01:17:59.976791 containerd[1451]: time="2026-03-04T01:17:59.976631244Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 4 01:18:00.799620 kubelet[2503]: E0304 01:18:00.799541 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:18:00.807997 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee3effe6ce7079a82423d473631ee97ca3511daa1c34b06aa4b3c6c0b1d831b1-rootfs.mount: Deactivated successfully. Mar 4 01:18:00.808866 containerd[1451]: time="2026-03-04T01:18:00.808836740Z" level=info msg="CreateContainer within sandbox \"c38e26972444e0af0b8996bc17c8e2a3643032d51d84a066cb89983f9cb0b48b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 4 01:18:00.828936 containerd[1451]: time="2026-03-04T01:18:00.828836600Z" level=info msg="CreateContainer within sandbox \"c38e26972444e0af0b8996bc17c8e2a3643032d51d84a066cb89983f9cb0b48b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9193ae4d60e5db1edc07aca43a64a282579894e8504c9e4ca7402b09f5122e09\"" Mar 4 01:18:00.829565 containerd[1451]: time="2026-03-04T01:18:00.829516744Z" level=info msg="StartContainer for \"9193ae4d60e5db1edc07aca43a64a282579894e8504c9e4ca7402b09f5122e09\"" Mar 4 01:18:00.868557 systemd[1]: run-containerd-runc-k8s.io-9193ae4d60e5db1edc07aca43a64a282579894e8504c9e4ca7402b09f5122e09-runc.FlEc1P.mount: Deactivated successfully. 
Mar 4 01:18:00.881995 systemd[1]: Started cri-containerd-9193ae4d60e5db1edc07aca43a64a282579894e8504c9e4ca7402b09f5122e09.scope - libcontainer container 9193ae4d60e5db1edc07aca43a64a282579894e8504c9e4ca7402b09f5122e09. Mar 4 01:18:00.912028 systemd[1]: cri-containerd-9193ae4d60e5db1edc07aca43a64a282579894e8504c9e4ca7402b09f5122e09.scope: Deactivated successfully. Mar 4 01:18:00.915293 containerd[1451]: time="2026-03-04T01:18:00.915236986Z" level=info msg="StartContainer for \"9193ae4d60e5db1edc07aca43a64a282579894e8504c9e4ca7402b09f5122e09\" returns successfully" Mar 4 01:18:00.947513 containerd[1451]: time="2026-03-04T01:18:00.947425010Z" level=info msg="shim disconnected" id=9193ae4d60e5db1edc07aca43a64a282579894e8504c9e4ca7402b09f5122e09 namespace=k8s.io Mar 4 01:18:00.947513 containerd[1451]: time="2026-03-04T01:18:00.947504889Z" level=warning msg="cleaning up after shim disconnected" id=9193ae4d60e5db1edc07aca43a64a282579894e8504c9e4ca7402b09f5122e09 namespace=k8s.io Mar 4 01:18:00.947862 containerd[1451]: time="2026-03-04T01:18:00.947520398Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 4 01:18:01.674673 kubelet[2503]: I0304 01:18:01.674519 2503 setters.go:546] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-04T01:18:01Z","lastTransitionTime":"2026-03-04T01:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 4 01:18:01.805637 kubelet[2503]: E0304 01:18:01.805531 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:18:01.809683 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9193ae4d60e5db1edc07aca43a64a282579894e8504c9e4ca7402b09f5122e09-rootfs.mount: Deactivated successfully. 
Mar 4 01:18:01.816710 containerd[1451]: time="2026-03-04T01:18:01.816559144Z" level=info msg="CreateContainer within sandbox \"c38e26972444e0af0b8996bc17c8e2a3643032d51d84a066cb89983f9cb0b48b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 4 01:18:01.838168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount65323228.mount: Deactivated successfully. Mar 4 01:18:01.840670 containerd[1451]: time="2026-03-04T01:18:01.840562548Z" level=info msg="CreateContainer within sandbox \"c38e26972444e0af0b8996bc17c8e2a3643032d51d84a066cb89983f9cb0b48b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5a032358c1f9e492d1e3ddd026f91e5b7865defb85eb56839177995203cd89cf\"" Mar 4 01:18:01.841651 containerd[1451]: time="2026-03-04T01:18:01.841582628Z" level=info msg="StartContainer for \"5a032358c1f9e492d1e3ddd026f91e5b7865defb85eb56839177995203cd89cf\"" Mar 4 01:18:01.886079 systemd[1]: Started cri-containerd-5a032358c1f9e492d1e3ddd026f91e5b7865defb85eb56839177995203cd89cf.scope - libcontainer container 5a032358c1f9e492d1e3ddd026f91e5b7865defb85eb56839177995203cd89cf. 
Mar 4 01:18:01.927465 containerd[1451]: time="2026-03-04T01:18:01.927218443Z" level=info msg="StartContainer for \"5a032358c1f9e492d1e3ddd026f91e5b7865defb85eb56839177995203cd89cf\" returns successfully" Mar 4 01:18:02.678996 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Mar 4 01:18:02.818995 kubelet[2503]: E0304 01:18:02.816020 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:18:02.855958 kubelet[2503]: I0304 01:18:02.854583 2503 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-slllv" podStartSLOduration=5.854559369 podStartE2EDuration="5.854559369s" podCreationTimestamp="2026-03-04 01:17:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 01:18:02.851566056 +0000 UTC m=+144.040256921" watchObservedRunningTime="2026-03-04 01:18:02.854559369 +0000 UTC m=+144.043250204" Mar 4 01:18:03.963466 kubelet[2503]: E0304 01:18:03.963321 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:18:06.675284 systemd-networkd[1388]: lxc_health: Link UP Mar 4 01:18:06.682495 systemd-networkd[1388]: lxc_health: Gained carrier Mar 4 01:18:07.970815 kubelet[2503]: E0304 01:18:07.967580 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:18:08.455714 systemd-networkd[1388]: lxc_health: Gained IPv6LL Mar 4 01:18:08.845902 kubelet[2503]: E0304 01:18:08.845846 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Mar 4 01:18:09.853557 kubelet[2503]: E0304 01:18:09.850947 2503 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 4 01:18:13.656892 systemd[1]: run-containerd-runc-k8s.io-5a032358c1f9e492d1e3ddd026f91e5b7865defb85eb56839177995203cd89cf-runc.qQiXGW.mount: Deactivated successfully. Mar 4 01:18:13.741529 sshd[4510]: pam_unix(sshd:session): session closed for user core Mar 4 01:18:13.747647 systemd[1]: sshd@36-10.0.0.125:22-10.0.0.1:56660.service: Deactivated successfully. Mar 4 01:18:13.749853 systemd[1]: session-37.scope: Deactivated successfully. Mar 4 01:18:13.751942 systemd-logind[1443]: Session 37 logged out. Waiting for processes to exit. Mar 4 01:18:13.753842 systemd-logind[1443]: Removed session 37.