Mar 3 13:47:58.037791 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Mar 3 10:59:45 -00 2026
Mar 3 13:47:58.037813 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=51ade538e3d3c371f07ae1ec6fa9803fff0566ec060cf4b56dc685fc36d0e01c
Mar 3 13:47:58.037865 kernel: BIOS-provided physical RAM map:
Mar 3 13:47:58.037872 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 3 13:47:58.037878 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 3 13:47:58.037884 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 3 13:47:58.037891 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 3 13:47:58.037897 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 3 13:47:58.037903 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 3 13:47:58.037909 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 3 13:47:58.037915 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 3 13:47:58.037923 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 3 13:47:58.037930 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 3 13:47:58.037936 kernel: NX (Execute Disable) protection: active
Mar 3 13:47:58.037943 kernel: APIC: Static calls initialized
Mar 3 13:47:58.037949 kernel: SMBIOS 2.8 present.
Mar 3 13:47:58.037974 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 3 13:47:58.037981 kernel: DMI: Memory slots populated: 1/1
Mar 3 13:47:58.037987 kernel: Hypervisor detected: KVM
Mar 3 13:47:58.037994 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 3 13:47:58.038000 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 3 13:47:58.038006 kernel: kvm-clock: using sched offset of 6302443474 cycles
Mar 3 13:47:58.038017 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 3 13:47:58.038029 kernel: tsc: Detected 2445.426 MHz processor
Mar 3 13:47:58.038040 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 3 13:47:58.038052 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 3 13:47:58.038069 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 3 13:47:58.038081 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 3 13:47:58.038089 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 3 13:47:58.038101 kernel: Using GB pages for direct mapping
Mar 3 13:47:58.038112 kernel: ACPI: Early table checksum verification disabled
Mar 3 13:47:58.038124 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 3 13:47:58.038135 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 3 13:47:58.038147 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 3 13:47:58.038156 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 3 13:47:58.038171 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 3 13:47:58.038183 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 3 13:47:58.038195 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 3 13:47:58.038206 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 3 13:47:58.038213 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 3 13:47:58.038227 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 3 13:47:58.038243 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 3 13:47:58.038255 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 3 13:47:58.038267 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 3 13:47:58.038305 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 3 13:47:58.038312 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 3 13:47:58.038319 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 3 13:47:58.038326 kernel: No NUMA configuration found
Mar 3 13:47:58.038333 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 3 13:47:58.038350 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Mar 3 13:47:58.038362 kernel: Zone ranges:
Mar 3 13:47:58.038374 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 3 13:47:58.038386 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 3 13:47:58.038398 kernel: Normal empty
Mar 3 13:47:58.038439 kernel: Device empty
Mar 3 13:47:58.038452 kernel: Movable zone start for each node
Mar 3 13:47:58.038463 kernel: Early memory node ranges
Mar 3 13:47:58.038470 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 3 13:47:58.038477 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 3 13:47:58.038488 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 3 13:47:58.038498 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 3 13:47:58.038510 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 3 13:47:58.038549 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 3 13:47:58.038563 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 3 13:47:58.038572 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 3 13:47:58.038579 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 3 13:47:58.038586 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 3 13:47:58.038594 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 3 13:47:58.038613 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 3 13:47:58.038625 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 3 13:47:58.038637 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 3 13:47:58.038648 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 3 13:47:58.038660 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 3 13:47:58.038671 kernel: TSC deadline timer available
Mar 3 13:47:58.038683 kernel: CPU topo: Max. logical packages: 1
Mar 3 13:47:58.038695 kernel: CPU topo: Max. logical dies: 1
Mar 3 13:47:58.038706 kernel: CPU topo: Max. dies per package: 1
Mar 3 13:47:58.038722 kernel: CPU topo: Max. threads per core: 1
Mar 3 13:47:58.038734 kernel: CPU topo: Num. cores per package: 4
Mar 3 13:47:58.038745 kernel: CPU topo: Num. threads per package: 4
Mar 3 13:47:58.038756 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Mar 3 13:47:58.038768 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 3 13:47:58.038779 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 3 13:47:58.038792 kernel: kvm-guest: setup PV sched yield
Mar 3 13:47:58.038804 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 3 13:47:58.038815 kernel: Booting paravirtualized kernel on KVM
Mar 3 13:47:58.038887 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 3 13:47:58.038905 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 3 13:47:58.038918 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Mar 3 13:47:58.038930 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Mar 3 13:47:58.038942 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 3 13:47:58.038953 kernel: kvm-guest: PV spinlocks enabled
Mar 3 13:47:58.038964 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 3 13:47:58.039004 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=51ade538e3d3c371f07ae1ec6fa9803fff0566ec060cf4b56dc685fc36d0e01c
Mar 3 13:47:58.039042 kernel: random: crng init done
Mar 3 13:47:58.039060 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 3 13:47:58.039072 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 3 13:47:58.039084 kernel: Fallback order for Node 0: 0
Mar 3 13:47:58.039096 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Mar 3 13:47:58.039107 kernel: Policy zone: DMA32
Mar 3 13:47:58.039119 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 3 13:47:58.039131 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 3 13:47:58.039143 kernel: ftrace: allocating 40099 entries in 157 pages
Mar 3 13:47:58.039155 kernel: ftrace: allocated 157 pages with 5 groups
Mar 3 13:47:58.039172 kernel: Dynamic Preempt: voluntary
Mar 3 13:47:58.039184 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 3 13:47:58.039198 kernel: rcu: RCU event tracing is enabled.
Mar 3 13:47:58.039210 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 3 13:47:58.039223 kernel: Trampoline variant of Tasks RCU enabled.
Mar 3 13:47:58.039261 kernel: Rude variant of Tasks RCU enabled.
Mar 3 13:47:58.039303 kernel: Tracing variant of Tasks RCU enabled.
Mar 3 13:47:58.039317 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 3 13:47:58.039329 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 3 13:47:58.039347 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 3 13:47:58.039360 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 3 13:47:58.039372 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 3 13:47:58.039383 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 3 13:47:58.039396 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 3 13:47:58.039422 kernel: Console: colour VGA+ 80x25
Mar 3 13:47:58.039438 kernel: printk: legacy console [ttyS0] enabled
Mar 3 13:47:58.039451 kernel: ACPI: Core revision 20240827
Mar 3 13:47:58.039463 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 3 13:47:58.039475 kernel: APIC: Switch to symmetric I/O mode setup
Mar 3 13:47:58.039487 kernel: x2apic enabled
Mar 3 13:47:58.039500 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 3 13:47:58.039517 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 3 13:47:58.039530 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 3 13:47:58.039541 kernel: kvm-guest: setup PV IPIs
Mar 3 13:47:58.039554 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 3 13:47:58.039568 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 3 13:47:58.039585 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 3 13:47:58.039598 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 3 13:47:58.039611 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 3 13:47:58.039623 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 3 13:47:58.039636 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 3 13:47:58.039648 kernel: Spectre V2 : Mitigation: Retpolines
Mar 3 13:47:58.039661 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 3 13:47:58.039673 kernel: Speculative Store Bypass: Vulnerable
Mar 3 13:47:58.039686 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 3 13:47:58.039705 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 3 13:47:58.039719 kernel: active return thunk: srso_alias_return_thunk
Mar 3 13:47:58.039731 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 3 13:47:58.039743 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 3 13:47:58.039756 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 3 13:47:58.039768 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 3 13:47:58.039780 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 3 13:47:58.039793 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 3 13:47:58.039811 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 3 13:47:58.039914 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 3 13:47:58.039929 kernel: Freeing SMP alternatives memory: 32K
Mar 3 13:47:58.039942 kernel: pid_max: default: 32768 minimum: 301
Mar 3 13:47:58.039954 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 3 13:47:58.039966 kernel: landlock: Up and running.
Mar 3 13:47:58.039979 kernel: SELinux: Initializing.
Mar 3 13:47:58.039991 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 3 13:47:58.040003 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 3 13:47:58.040048 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 3 13:47:58.040063 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 3 13:47:58.040075 kernel: signal: max sigframe size: 1776
Mar 3 13:47:58.040087 kernel: rcu: Hierarchical SRCU implementation.
Mar 3 13:47:58.040100 kernel: rcu: Max phase no-delay instances is 400.
Mar 3 13:47:58.040113 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 3 13:47:58.040125 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 3 13:47:58.040137 kernel: smp: Bringing up secondary CPUs ...
Mar 3 13:47:58.040150 kernel: smpboot: x86: Booting SMP configuration:
Mar 3 13:47:58.040168 kernel: .... node #0, CPUs: #1 #2 #3
Mar 3 13:47:58.040179 kernel: smp: Brought up 1 node, 4 CPUs
Mar 3 13:47:58.040192 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 3 13:47:58.040205 kernel: Memory: 2420720K/2571752K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 145096K reserved, 0K cma-reserved)
Mar 3 13:47:58.040218 kernel: devtmpfs: initialized
Mar 3 13:47:58.040230 kernel: x86/mm: Memory block size: 128MB
Mar 3 13:47:58.040242 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 3 13:47:58.040255 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 3 13:47:58.040267 kernel: pinctrl core: initialized pinctrl subsystem
Mar 3 13:47:58.040318 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 3 13:47:58.040327 kernel: audit: initializing netlink subsys (disabled)
Mar 3 13:47:58.040334 kernel: audit: type=2000 audit(1772545674.248:1): state=initialized audit_enabled=0 res=1
Mar 3 13:47:58.040341 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 3 13:47:58.040348 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 3 13:47:58.040355 kernel: cpuidle: using governor menu
Mar 3 13:47:58.040362 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 3 13:47:58.040369 kernel: dca service started, version 1.12.1
Mar 3 13:47:58.040376 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Mar 3 13:47:58.040388 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 3 13:47:58.040395 kernel: PCI: Using configuration type 1 for base access
Mar 3 13:47:58.040402 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 3 13:47:58.040409 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 3 13:47:58.040417 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 3 13:47:58.040427 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 3 13:47:58.040440 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 3 13:47:58.040453 kernel: ACPI: Added _OSI(Module Device)
Mar 3 13:47:58.040465 kernel: ACPI: Added _OSI(Processor Device)
Mar 3 13:47:58.040482 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 3 13:47:58.040494 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 3 13:47:58.040506 kernel: ACPI: Interpreter enabled
Mar 3 13:47:58.040519 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 3 13:47:58.040531 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 3 13:47:58.040543 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 3 13:47:58.040555 kernel: PCI: Using E820 reservations for host bridge windows
Mar 3 13:47:58.040568 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 3 13:47:58.040580 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 3 13:47:58.041038 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 3 13:47:58.041266 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 3 13:47:58.041554 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 3 13:47:58.041574 kernel: PCI host bridge to bus 0000:00
Mar 3 13:47:58.041791 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 3 13:47:58.042089 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 3 13:47:58.042380 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 3 13:47:58.042588 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 3 13:47:58.042792 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 3 13:47:58.043069 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 3 13:47:58.043271 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 3 13:47:58.043563 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Mar 3 13:47:58.043923 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Mar 3 13:47:58.044240 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Mar 3 13:47:58.044498 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Mar 3 13:47:58.044645 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Mar 3 13:47:58.044786 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 3 13:47:58.045007 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Mar 3 13:47:58.045152 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Mar 3 13:47:58.045334 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Mar 3 13:47:58.045478 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 3 13:47:58.045627 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Mar 3 13:47:58.045768 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Mar 3 13:47:58.046031 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Mar 3 13:47:58.046178 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 3 13:47:58.046370 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Mar 3 13:47:58.046520 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Mar 3 13:47:58.046660 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Mar 3 13:47:58.046798 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 3 13:47:58.047072 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Mar 3 13:47:58.047331 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Mar 3 13:47:58.047546 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 3 13:47:58.047779 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Mar 3 13:47:58.048084 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Mar 3 13:47:58.048361 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Mar 3 13:47:58.048599 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Mar 3 13:47:58.048877 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Mar 3 13:47:58.048900 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 3 13:47:58.048913 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 3 13:47:58.048927 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 3 13:47:58.048946 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 3 13:47:58.048959 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 3 13:47:58.048970 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 3 13:47:58.048983 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 3 13:47:58.048995 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 3 13:47:58.049007 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 3 13:47:58.049019 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 3 13:47:58.049031 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 3 13:47:58.049043 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 3 13:47:58.049060 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 3 13:47:58.049074 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 3 13:47:58.049086 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 3 13:47:58.049098 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 3 13:47:58.049111 kernel: iommu: Default domain type: Translated
Mar 3 13:47:58.049123 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 3 13:47:58.049135 kernel: PCI: Using ACPI for IRQ routing
Mar 3 13:47:58.049147 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 3 13:47:58.049160 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 3 13:47:58.049172 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 3 13:47:58.049397 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 3 13:47:58.049619 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 3 13:47:58.049907 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 3 13:47:58.049927 kernel: vgaarb: loaded
Mar 3 13:47:58.049940 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 3 13:47:58.049953 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 3 13:47:58.049965 kernel: clocksource: Switched to clocksource kvm-clock
Mar 3 13:47:58.049984 kernel: VFS: Disk quotas dquot_6.6.0
Mar 3 13:47:58.049997 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 3 13:47:58.050009 kernel: pnp: PnP ACPI init
Mar 3 13:47:58.050271 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 3 13:47:58.050330 kernel: pnp: PnP ACPI: found 6 devices
Mar 3 13:47:58.050344 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 3 13:47:58.050357 kernel: NET: Registered PF_INET protocol family
Mar 3 13:47:58.050369 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 3 13:47:58.050382 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 3 13:47:58.050400 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 3 13:47:58.050414 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 3 13:47:58.050426 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 3 13:47:58.050439 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 3 13:47:58.050451 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 3 13:47:58.050463 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 3 13:47:58.050475 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 3 13:47:58.050488 kernel: NET: Registered PF_XDP protocol family
Mar 3 13:47:58.050705 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 3 13:47:58.051014 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 3 13:47:58.051219 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 3 13:47:58.051465 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 3 13:47:58.051663 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 3 13:47:58.051927 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 3 13:47:58.051948 kernel: PCI: CLS 0 bytes, default 64
Mar 3 13:47:58.051961 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 3 13:47:58.051974 kernel: Initialise system trusted keyrings
Mar 3 13:47:58.051994 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 3 13:47:58.052008 kernel: Key type asymmetric registered
Mar 3 13:47:58.052020 kernel: Asymmetric key parser 'x509' registered
Mar 3 13:47:58.052032 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 3 13:47:58.052045 kernel: io scheduler mq-deadline registered
Mar 3 13:47:58.052057 kernel: io scheduler kyber registered
Mar 3 13:47:58.052069 kernel: io scheduler bfq registered
Mar 3 13:47:58.052083 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 3 13:47:58.052095 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 3 13:47:58.052113 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 3 13:47:58.052126 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 3 13:47:58.052139 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 3 13:47:58.052151 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 3 13:47:58.052164 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 3 13:47:58.052176 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 3 13:47:58.052189 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 3 13:47:58.052532 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 3 13:47:58.052560 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 3 13:47:58.052760 kernel: rtc_cmos 00:04: registered as rtc0
Mar 3 13:47:58.053031 kernel: rtc_cmos 00:04: setting system clock to 2026-03-03T13:47:57 UTC (1772545677)
Mar 3 13:47:58.053175 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 3 13:47:58.053187 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 3 13:47:58.053195 kernel: NET: Registered PF_INET6 protocol family
Mar 3 13:47:58.053203 kernel: Segment Routing with IPv6
Mar 3 13:47:58.053210 kernel: In-situ OAM (IOAM) with IPv6
Mar 3 13:47:58.053218 kernel: NET: Registered PF_PACKET protocol family
Mar 3 13:47:58.053231 kernel: Key type dns_resolver registered
Mar 3 13:47:58.053238 kernel: IPI shorthand broadcast: enabled
Mar 3 13:47:58.053245 kernel: sched_clock: Marking stable (3278023102, 372461999)->(3846101997, -195616896)
Mar 3 13:47:58.053253 kernel: registered taskstats version 1
Mar 3 13:47:58.053260 kernel: Loading compiled-in X.509 certificates
Mar 3 13:47:58.053267 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: bf135b2a3d3664cc6742f4e1848867384c1e52f1'
Mar 3 13:47:58.053304 kernel: Demotion targets for Node 0: null
Mar 3 13:47:58.053312 kernel: Key type .fscrypt registered
Mar 3 13:47:58.053319 kernel: Key type fscrypt-provisioning registered
Mar 3 13:47:58.053330 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 3 13:47:58.053337 kernel: ima: Allocated hash algorithm: sha1
Mar 3 13:47:58.053345 kernel: ima: No architecture policies found
Mar 3 13:47:58.053351 kernel: clk: Disabling unused clocks
Mar 3 13:47:58.053359 kernel: Warning: unable to open an initial console.
Mar 3 13:47:58.053366 kernel: Freeing unused kernel image (initmem) memory: 46200K
Mar 3 13:47:58.053374 kernel: Write protecting the kernel read-only data: 40960k
Mar 3 13:47:58.053381 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Mar 3 13:47:58.053390 kernel: Run /init as init process
Mar 3 13:47:58.053398 kernel: with arguments:
Mar 3 13:47:58.053405 kernel: /init
Mar 3 13:47:58.053412 kernel: with environment:
Mar 3 13:47:58.053419 kernel: HOME=/
Mar 3 13:47:58.053426 kernel: TERM=linux
Mar 3 13:47:58.053434 systemd[1]: Successfully made /usr/ read-only.
Mar 3 13:47:58.053444 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 3 13:47:58.053455 systemd[1]: Detected virtualization kvm.
Mar 3 13:47:58.053462 systemd[1]: Detected architecture x86-64.
Mar 3 13:47:58.053470 systemd[1]: Running in initrd.
Mar 3 13:47:58.053477 systemd[1]: No hostname configured, using default hostname.
Mar 3 13:47:58.053485 systemd[1]: Hostname set to .
Mar 3 13:47:58.053492 systemd[1]: Initializing machine ID from VM UUID.
Mar 3 13:47:58.053500 systemd[1]: Queued start job for default target initrd.target.
Mar 3 13:47:58.053508 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 3 13:47:58.053529 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 3 13:47:58.053540 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 3 13:47:58.053551 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 3 13:47:58.053559 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 3 13:47:58.053567 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 3 13:47:58.053579 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 3 13:47:58.053587 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 3 13:47:58.053595 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 3 13:47:58.053603 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 3 13:47:58.053611 systemd[1]: Reached target paths.target - Path Units.
Mar 3 13:47:58.053618 systemd[1]: Reached target slices.target - Slice Units.
Mar 3 13:47:58.053626 systemd[1]: Reached target swap.target - Swaps.
Mar 3 13:47:58.053634 systemd[1]: Reached target timers.target - Timer Units.
Mar 3 13:47:58.053644 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 3 13:47:58.053652 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 3 13:47:58.053659 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 3 13:47:58.053667 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 3 13:47:58.053675 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 3 13:47:58.053683 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 3 13:47:58.053690 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 3 13:47:58.053698 systemd[1]: Reached target sockets.target - Socket Units.
Mar 3 13:47:58.053706 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 3 13:47:58.053716 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 3 13:47:58.053724 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 3 13:47:58.053732 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Mar 3 13:47:58.053740 systemd[1]: Starting systemd-fsck-usr.service...
Mar 3 13:47:58.053747 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 3 13:47:58.053755 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 3 13:47:58.053763 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 3 13:47:58.053771 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 3 13:47:58.053784 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 3 13:47:58.053794 systemd[1]: Finished systemd-fsck-usr.service.
Mar 3 13:47:58.053885 systemd-journald[201]: Collecting audit messages is disabled.
Mar 3 13:47:58.053908 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 3 13:47:58.053917 systemd-journald[201]: Journal started
Mar 3 13:47:58.053938 systemd-journald[201]: Runtime Journal (/run/log/journal/23ee60ea95f144e7ae263097e2411c68) is 6M, max 48.3M, 42.2M free.
Mar 3 13:47:58.037253 systemd-modules-load[204]: Inserted module 'overlay'
Mar 3 13:47:58.059737 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 3 13:47:58.079929 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 3 13:47:58.081660 systemd-modules-load[204]: Inserted module 'br_netfilter'
Mar 3 13:47:58.225202 kernel: Bridge firewalling registered
Mar 3 13:47:58.082328 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 3 13:47:58.238023 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 3 13:47:58.238410 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 3 13:47:58.247049 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 3 13:47:58.253103 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 3 13:47:58.257985 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 3 13:47:58.274420 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 3 13:47:58.275141 systemd-tmpfiles[215]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Mar 3 13:47:58.281443 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 3 13:47:58.283044 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 3 13:47:58.285970 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 3 13:47:58.312962 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 3 13:47:58.323349 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 3 13:47:58.332686 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 3 13:47:58.347383 systemd-resolved[232]: Positive Trust Anchors:
Mar 3 13:47:58.347416 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 3 13:47:58.347460 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 3 13:47:58.351504 systemd-resolved[232]: Defaulting to hostname 'linux'.
Mar 3 13:47:58.353933 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 3 13:47:58.382262 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=51ade538e3d3c371f07ae1ec6fa9803fff0566ec060cf4b56dc685fc36d0e01c
Mar 3 13:47:58.355436 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 3 13:47:58.503897 kernel: SCSI subsystem initialized
Mar 3 13:47:58.513883 kernel: Loading iSCSI transport class v2.0-870.
Mar 3 13:47:58.524887 kernel: iscsi: registered transport (tcp)
Mar 3 13:47:58.545963 kernel: iscsi: registered transport (qla4xxx)
Mar 3 13:47:58.546017 kernel: QLogic iSCSI HBA Driver
Mar 3 13:47:58.571433 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 3 13:47:58.599657 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 3 13:47:58.607714 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 3 13:47:58.669487 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 3 13:47:58.671234 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 3 13:47:58.736897 kernel: raid6: avx2x4 gen() 31721 MB/s
Mar 3 13:47:58.754893 kernel: raid6: avx2x2 gen() 31064 MB/s
Mar 3 13:47:58.774382 kernel: raid6: avx2x1 gen() 21667 MB/s
Mar 3 13:47:58.774448 kernel: raid6: using algorithm avx2x4 gen() 31721 MB/s
Mar 3 13:47:58.793659 kernel: raid6: .... xor() 4879 MB/s, rmw enabled
Mar 3 13:47:58.793700 kernel: raid6: using avx2x2 recovery algorithm
Mar 3 13:47:58.813898 kernel: xor: automatically using best checksumming function avx
Mar 3 13:47:58.965949 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 3 13:47:58.976012 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 3 13:47:58.980927 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 3 13:47:59.012586 systemd-udevd[455]: Using default interface naming scheme 'v255'.
Mar 3 13:47:59.019261 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 3 13:47:59.020539 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 3 13:47:59.054811 dracut-pre-trigger[460]: rd.md=0: removing MD RAID activation
Mar 3 13:47:59.093958 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 3 13:47:59.100813 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 3 13:47:59.210989 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 3 13:47:59.219775 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 3 13:47:59.266902 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 3 13:47:59.281884 kernel: cryptd: max_cpu_qlen set to 1000
Mar 3 13:47:59.303562 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 3 13:47:59.310365 kernel: libata version 3.00 loaded.
Mar 3 13:47:59.309138 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 3 13:47:59.326932 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 3 13:47:59.326950 kernel: GPT:9289727 != 19775487
Mar 3 13:47:59.326961 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 3 13:47:59.326972 kernel: GPT:9289727 != 19775487
Mar 3 13:47:59.326982 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 3 13:47:59.326992 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 3 13:47:59.309261 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 3 13:47:59.327148 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 3 13:47:59.338583 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 3 13:47:59.356996 kernel: AES CTR mode by8 optimization enabled
Mar 3 13:47:59.357024 kernel: ahci 0000:00:1f.2: version 3.0
Mar 3 13:47:59.357324 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Mar 3 13:47:59.357347 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 3 13:47:59.347248 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 3 13:47:59.377397 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Mar 3 13:47:59.380143 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Mar 3 13:47:59.380365 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 3 13:47:59.394608 kernel: scsi host0: ahci
Mar 3 13:47:59.394880 kernel: scsi host1: ahci
Mar 3 13:47:59.396859 kernel: scsi host2: ahci
Mar 3 13:47:59.398867 kernel: scsi host3: ahci
Mar 3 13:47:59.400874 kernel: scsi host4: ahci
Mar 3 13:47:59.408887 kernel: scsi host5: ahci
Mar 3 13:47:59.409091 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1
Mar 3 13:47:59.409104 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1
Mar 3 13:47:59.409114 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1
Mar 3 13:47:59.412715 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1
Mar 3 13:47:59.412741 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1
Mar 3 13:47:59.412753 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1
Mar 3 13:47:59.413727 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 3 13:47:59.561763 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 3 13:47:59.569525 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 3 13:47:59.586439 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 3 13:47:59.586631 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 3 13:47:59.603099 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 3 13:47:59.607432 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 3 13:47:59.637193 disk-uuid[619]: Primary Header is updated.
Mar 3 13:47:59.637193 disk-uuid[619]: Secondary Entries is updated.
Mar 3 13:47:59.637193 disk-uuid[619]: Secondary Header is updated.
Mar 3 13:47:59.644944 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 3 13:47:59.651904 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 3 13:47:59.726881 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 3 13:47:59.726932 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 3 13:47:59.728900 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 3 13:47:59.730886 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 3 13:47:59.732881 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 3 13:47:59.735386 kernel: ata3.00: LPM support broken, forcing max_power
Mar 3 13:47:59.737279 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 3 13:47:59.737359 kernel: ata3.00: applying bridge limits
Mar 3 13:47:59.738877 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 3 13:47:59.740898 kernel: ata3.00: LPM support broken, forcing max_power
Mar 3 13:47:59.743528 kernel: ata3.00: configured for UDMA/100
Mar 3 13:47:59.746876 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 3 13:47:59.812082 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 3 13:47:59.812477 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 3 13:47:59.833878 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 3 13:48:00.264697 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 3 13:48:00.267881 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 3 13:48:00.272933 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 3 13:48:00.275773 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 3 13:48:00.276778 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 3 13:48:00.305587 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 3 13:48:00.654872 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 3 13:48:00.655972 disk-uuid[620]: The operation has completed successfully.
Mar 3 13:48:00.696954 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 3 13:48:00.697158 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 3 13:48:00.732757 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 3 13:48:00.762706 sh[649]: Success
Mar 3 13:48:00.787691 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 3 13:48:00.787730 kernel: device-mapper: uevent: version 1.0.3
Mar 3 13:48:00.790905 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Mar 3 13:48:00.804887 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Mar 3 13:48:00.845402 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 3 13:48:00.854927 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 3 13:48:00.881161 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 3 13:48:00.891874 kernel: BTRFS: device fsid f550cb98-648e-4600-9237-4b15eb09827b devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (661)
Mar 3 13:48:00.901277 kernel: BTRFS info (device dm-0): first mount of filesystem f550cb98-648e-4600-9237-4b15eb09827b
Mar 3 13:48:00.901342 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 3 13:48:00.911678 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Mar 3 13:48:00.911721 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Mar 3 13:48:00.913647 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 3 13:48:00.914331 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Mar 3 13:48:00.918700 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 3 13:48:00.919741 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 3 13:48:00.928262 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 3 13:48:00.972904 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (694)
Mar 3 13:48:00.972953 kernel: BTRFS info (device vda6): first mount of filesystem af9be1e8-b0f0-42a3-a696-521642a3b9f8
Mar 3 13:48:00.979936 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 3 13:48:00.987866 kernel: BTRFS info (device vda6): turning on async discard
Mar 3 13:48:00.987984 kernel: BTRFS info (device vda6): enabling free space tree
Mar 3 13:48:00.997872 kernel: BTRFS info (device vda6): last unmount of filesystem af9be1e8-b0f0-42a3-a696-521642a3b9f8
Mar 3 13:48:01.000249 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 3 13:48:01.004812 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 3 13:48:01.103015 ignition[746]: Ignition 2.22.0
Mar 3 13:48:01.103043 ignition[746]: Stage: fetch-offline
Mar 3 13:48:01.103081 ignition[746]: no configs at "/usr/lib/ignition/base.d"
Mar 3 13:48:01.103092 ignition[746]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 3 13:48:01.103169 ignition[746]: parsed url from cmdline: ""
Mar 3 13:48:01.103173 ignition[746]: no config URL provided
Mar 3 13:48:01.103178 ignition[746]: reading system config file "/usr/lib/ignition/user.ign"
Mar 3 13:48:01.103188 ignition[746]: no config at "/usr/lib/ignition/user.ign"
Mar 3 13:48:01.103211 ignition[746]: op(1): [started] loading QEMU firmware config module
Mar 3 13:48:01.103216 ignition[746]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 3 13:48:01.113046 ignition[746]: op(1): [finished] loading QEMU firmware config module
Mar 3 13:48:01.113064 ignition[746]: QEMU firmware config was not found. Ignoring...
Mar 3 13:48:01.135335 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 3 13:48:01.137405 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 3 13:48:01.192313 systemd-networkd[839]: lo: Link UP
Mar 3 13:48:01.192332 systemd-networkd[839]: lo: Gained carrier
Mar 3 13:48:01.194460 systemd-networkd[839]: Enumeration completed
Mar 3 13:48:01.194568 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 3 13:48:01.195158 systemd-networkd[839]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 3 13:48:01.195163 systemd-networkd[839]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 3 13:48:01.196336 systemd[1]: Reached target network.target - Network.
Mar 3 13:48:01.196897 systemd-networkd[839]: eth0: Link UP
Mar 3 13:48:01.197089 systemd-networkd[839]: eth0: Gained carrier
Mar 3 13:48:01.197099 systemd-networkd[839]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 3 13:48:01.229879 systemd-networkd[839]: eth0: DHCPv4 address 10.0.0.100/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 3 13:48:01.320470 ignition[746]: parsing config with SHA512: 8079e23ab1a1b24e002bfb1e1897c219fcfc4853a26da563c225e8da48cc690479a24e92e0a688fffdec4bed881b720861173df44a181a55223777f014ee2630
Mar 3 13:48:01.325686 unknown[746]: fetched base config from "system"
Mar 3 13:48:01.325712 unknown[746]: fetched user config from "qemu"
Mar 3 13:48:01.326087 ignition[746]: fetch-offline: fetch-offline passed
Mar 3 13:48:01.326873 systemd-resolved[232]: Detected conflict on linux IN A 10.0.0.100
Mar 3 13:48:01.326173 ignition[746]: Ignition finished successfully
Mar 3 13:48:01.326886 systemd-resolved[232]: Hostname conflict, changing published hostname from 'linux' to 'linux3'.
Mar 3 13:48:01.329062 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 3 13:48:01.333626 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 3 13:48:01.334932 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 3 13:48:01.383927 ignition[844]: Ignition 2.22.0
Mar 3 13:48:01.383950 ignition[844]: Stage: kargs
Mar 3 13:48:01.384070 ignition[844]: no configs at "/usr/lib/ignition/base.d"
Mar 3 13:48:01.384081 ignition[844]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 3 13:48:01.390191 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 3 13:48:01.384672 ignition[844]: kargs: kargs passed
Mar 3 13:48:01.393394 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 3 13:48:01.384713 ignition[844]: Ignition finished successfully
Mar 3 13:48:01.441324 ignition[852]: Ignition 2.22.0
Mar 3 13:48:01.441351 ignition[852]: Stage: disks
Mar 3 13:48:01.441503 ignition[852]: no configs at "/usr/lib/ignition/base.d"
Mar 3 13:48:01.441516 ignition[852]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 3 13:48:01.442176 ignition[852]: disks: disks passed
Mar 3 13:48:01.442224 ignition[852]: Ignition finished successfully
Mar 3 13:48:01.449866 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 3 13:48:01.455606 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 3 13:48:01.457609 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 3 13:48:01.469396 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 3 13:48:01.469519 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 3 13:48:01.480594 systemd[1]: Reached target basic.target - Basic System.
Mar 3 13:48:01.484974 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 3 13:48:01.518386 systemd-fsck[862]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Mar 3 13:48:01.524640 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 3 13:48:01.533389 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 3 13:48:01.668907 kernel: EXT4-fs (vda9): mounted filesystem f0c751de-febc-4e57-b330-c926d38ed5ec r/w with ordered data mode. Quota mode: none.
Mar 3 13:48:01.670241 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 3 13:48:01.671095 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 3 13:48:01.680208 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 3 13:48:01.683496 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 3 13:48:01.684907 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 3 13:48:01.684970 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 3 13:48:01.685004 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 3 13:48:01.714105 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 3 13:48:01.719478 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 3 13:48:01.727477 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (870)
Mar 3 13:48:01.727513 kernel: BTRFS info (device vda6): first mount of filesystem af9be1e8-b0f0-42a3-a696-521642a3b9f8
Mar 3 13:48:01.727531 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 3 13:48:01.736647 kernel: BTRFS info (device vda6): turning on async discard
Mar 3 13:48:01.736679 kernel: BTRFS info (device vda6): enabling free space tree
Mar 3 13:48:01.741252 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 3 13:48:01.796060 initrd-setup-root[894]: cut: /sysroot/etc/passwd: No such file or directory
Mar 3 13:48:01.804335 initrd-setup-root[901]: cut: /sysroot/etc/group: No such file or directory
Mar 3 13:48:01.811181 initrd-setup-root[908]: cut: /sysroot/etc/shadow: No such file or directory
Mar 3 13:48:01.818597 initrd-setup-root[915]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 3 13:48:01.941916 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 3 13:48:01.949186 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 3 13:48:01.954973 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 3 13:48:01.972519 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 3 13:48:01.976769 kernel: BTRFS info (device vda6): last unmount of filesystem af9be1e8-b0f0-42a3-a696-521642a3b9f8
Mar 3 13:48:01.994077 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 3 13:48:02.016236 ignition[983]: INFO : Ignition 2.22.0
Mar 3 13:48:02.016236 ignition[983]: INFO : Stage: mount
Mar 3 13:48:02.020991 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 3 13:48:02.020991 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 3 13:48:02.020991 ignition[983]: INFO : mount: mount passed
Mar 3 13:48:02.020991 ignition[983]: INFO : Ignition finished successfully
Mar 3 13:48:02.022271 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 3 13:48:02.031917 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 3 13:48:02.071608 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 3 13:48:02.108899 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (997)
Mar 3 13:48:02.108963 kernel: BTRFS info (device vda6): first mount of filesystem af9be1e8-b0f0-42a3-a696-521642a3b9f8
Mar 3 13:48:02.114861 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 3 13:48:02.121952 kernel: BTRFS info (device vda6): turning on async discard
Mar 3 13:48:02.121987 kernel: BTRFS info (device vda6): enabling free space tree
Mar 3 13:48:02.124767 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 3 13:48:02.161215 ignition[1014]: INFO : Ignition 2.22.0
Mar 3 13:48:02.161215 ignition[1014]: INFO : Stage: files
Mar 3 13:48:02.167505 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 3 13:48:02.167505 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 3 13:48:02.167505 ignition[1014]: DEBUG : files: compiled without relabeling support, skipping
Mar 3 13:48:02.167505 ignition[1014]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 3 13:48:02.167505 ignition[1014]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 3 13:48:02.167505 ignition[1014]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 3 13:48:02.167505 ignition[1014]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 3 13:48:02.191703 ignition[1014]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 3 13:48:02.191703 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 3 13:48:02.191703 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 3 13:48:02.167536 unknown[1014]: wrote ssh authorized keys file for user: core
Mar 3 13:48:02.244956 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 3 13:48:02.343411 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 3 13:48:02.343411 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 3 13:48:02.354431 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 3 13:48:02.518458 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 3 13:48:02.573698 systemd-networkd[839]: eth0: Gained IPv6LL
Mar 3 13:48:02.621611 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 3 13:48:02.621611 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 3 13:48:02.633160 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 3 13:48:02.633160 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 3 13:48:02.633160 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 3 13:48:02.633160 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 3 13:48:02.633160 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 3 13:48:02.633160 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 3 13:48:02.633160 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 3 13:48:02.633160 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 3 13:48:02.633160 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 3 13:48:02.633160 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 3 13:48:02.633160 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 3 13:48:02.633160 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 3 13:48:02.633160 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Mar 3 13:48:02.895207 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 3 13:48:03.291129 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 3 13:48:03.291129 ignition[1014]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 3 13:48:03.315132 ignition[1014]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 3 13:48:03.322155 ignition[1014]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 3 13:48:03.322155 ignition[1014]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 3 13:48:03.322155 ignition[1014]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Mar 3 13:48:03.322155 ignition[1014]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 3 13:48:03.342278 ignition[1014]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 3 13:48:03.342278 ignition[1014]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Mar 3 13:48:03.342278 ignition[1014]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Mar 3 13:48:03.376349 ignition[1014]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 3 13:48:03.419656 ignition[1014]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 3 13:48:03.419656 ignition[1014]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 3 13:48:03.419656 ignition[1014]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Mar 3 13:48:03.457551 ignition[1014]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Mar 3 13:48:03.457551 ignition[1014]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 3 13:48:03.457551 ignition[1014]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 3 13:48:03.457551 ignition[1014]: INFO : files: files passed
Mar 3 13:48:03.457551 ignition[1014]: INFO : Ignition finished successfully
Mar 3 13:48:03.458454 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 3 13:48:03.474680 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 3 13:48:03.500384 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 3 13:48:03.532427 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 3 13:48:03.532638 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 3 13:48:03.566885 initrd-setup-root-after-ignition[1042]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 3 13:48:03.576178 initrd-setup-root-after-ignition[1045]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 3 13:48:03.576178 initrd-setup-root-after-ignition[1045]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 3 13:48:03.607148 initrd-setup-root-after-ignition[1049]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 3 13:48:03.608126 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 3 13:48:03.614278 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 3 13:48:03.622427 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 3 13:48:03.726210 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 3 13:48:03.726475 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 3 13:48:03.734115 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 3 13:48:03.742337 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 3 13:48:03.759146 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 3 13:48:03.760876 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 3 13:48:03.828072 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 3 13:48:03.831072 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 3 13:48:03.878222 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 3 13:48:03.881928 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 3 13:48:03.889166 systemd[1]: Stopped target timers.target - Timer Units.
Mar 3 13:48:03.892682 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 3 13:48:03.893062 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 3 13:48:03.906633 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 3 13:48:03.909812 systemd[1]: Stopped target basic.target - Basic System.
Mar 3 13:48:03.912973 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 3 13:48:03.917699 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 3 13:48:03.924705 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 3 13:48:03.933060 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Mar 3 13:48:03.945125 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 3 13:48:03.945440 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 3 13:48:03.959171 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 3 13:48:03.962382 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 3 13:48:03.981622 systemd[1]: Stopped target swap.target - Swaps.
Mar 3 13:48:03.981952 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 3 13:48:03.982272 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 3 13:48:03.995129 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 3 13:48:03.995455 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 3 13:48:04.001060 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 3 13:48:04.001390 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 3 13:48:04.007187 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 3 13:48:04.007438 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 3 13:48:04.021128 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 3 13:48:04.021360 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 3 13:48:04.029573 systemd[1]: Stopped target paths.target - Path Units.
Mar 3 13:48:04.037918 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 3 13:48:04.038392 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 3 13:48:04.040277 systemd[1]: Stopped target slices.target - Slice Units.
Mar 3 13:48:04.049511 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 3 13:48:04.053020 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 3 13:48:04.053151 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 3 13:48:04.062010 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 3 13:48:04.062148 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 3 13:48:04.065687 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 3 13:48:04.066208 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 3 13:48:04.075749 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 3 13:48:04.075986 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 3 13:48:04.082774 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 3 13:48:04.089023 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 3 13:48:04.097397 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 3 13:48:04.097602 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 3 13:48:04.106341 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 3 13:48:04.106466 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 3 13:48:04.125578 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 3 13:48:04.125793 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 3 13:48:04.138935 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 3 13:48:04.141991 ignition[1069]: INFO : Ignition 2.22.0
Mar 3 13:48:04.141991 ignition[1069]: INFO : Stage: umount
Mar 3 13:48:04.141991 ignition[1069]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 3 13:48:04.141991 ignition[1069]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 3 13:48:04.141991 ignition[1069]: INFO : umount: umount passed
Mar 3 13:48:04.141991 ignition[1069]: INFO : Ignition finished successfully
Mar 3 13:48:04.142676 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 3 13:48:04.142812 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 3 13:48:04.146965 systemd[1]: Stopped target network.target - Network.
Mar 3 13:48:04.156957 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 3 13:48:04.157093 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 3 13:48:04.159260 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 3 13:48:04.159380 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 3 13:48:04.160493 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 3 13:48:04.160563 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 3 13:48:04.161587 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 3 13:48:04.161650 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 3 13:48:04.163122 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 3 13:48:04.163773 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 3 13:48:04.185482 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 3 13:48:04.185660 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 3 13:48:04.194903 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 3 13:48:04.195263 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 3 13:48:04.195377 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 3 13:48:04.207349 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 3 13:48:04.207925 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 3 13:48:04.208133 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 3 13:48:04.216208 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 3 13:48:04.216932 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 3 13:48:04.218620 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 3 13:48:04.218689 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 3 13:48:04.280731 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 3 13:48:04.280954 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 3 13:48:04.281033 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 3 13:48:04.285674 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 3 13:48:04.285730 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 3 13:48:04.304273 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 3 13:48:04.304386 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 3 13:48:04.312135 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 3 13:48:04.315630 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 3 13:48:04.316132 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 3 13:48:04.316278 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 3 13:48:04.331292 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 3 13:48:04.331455 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 3 13:48:04.350469 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 3 13:48:04.350658 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 3 13:48:04.366522 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 3 13:48:04.366904 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 3 13:48:04.375005 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 3 13:48:04.375073 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 3 13:48:04.378545 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 3 13:48:04.378598 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 3 13:48:04.390032 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 3 13:48:04.390129 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 3 13:48:04.405124 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 3 13:48:04.405205 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 3 13:48:04.415159 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 3 13:48:04.415249 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 3 13:48:04.427164 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 3 13:48:04.430003 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 3 13:48:04.430072 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 3 13:48:04.451439 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 3 13:48:04.451537 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 3 13:48:04.466755 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 3 13:48:04.466996 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 3 13:48:04.481559 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 3 13:48:04.481650 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 3 13:48:04.490128 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 3 13:48:04.490187 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 3 13:48:04.503035 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 3 13:48:04.503182 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 3 13:48:04.511559 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 3 13:48:04.516437 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 3 13:48:04.549927 systemd[1]: Switching root.
Mar 3 13:48:04.598993 systemd-journald[201]: Journal stopped
Mar 3 13:48:06.187554 systemd-journald[201]: Received SIGTERM from PID 1 (systemd).
Mar 3 13:48:06.187620 kernel: SELinux: policy capability network_peer_controls=1
Mar 3 13:48:06.187642 kernel: SELinux: policy capability open_perms=1
Mar 3 13:48:06.187653 kernel: SELinux: policy capability extended_socket_class=1
Mar 3 13:48:06.187665 kernel: SELinux: policy capability always_check_network=0
Mar 3 13:48:06.187681 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 3 13:48:06.187692 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 3 13:48:06.187703 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 3 13:48:06.187716 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 3 13:48:06.187730 kernel: SELinux: policy capability userspace_initial_context=0
Mar 3 13:48:06.187744 kernel: audit: type=1403 audit(1772545684.819:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 3 13:48:06.187761 systemd[1]: Successfully loaded SELinux policy in 79.500ms.
Mar 3 13:48:06.187780 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.512ms.
Mar 3 13:48:06.187792 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 3 13:48:06.187804 systemd[1]: Detected virtualization kvm.
Mar 3 13:48:06.187816 systemd[1]: Detected architecture x86-64.
Mar 3 13:48:06.187866 systemd[1]: Detected first boot.
Mar 3 13:48:06.187878 systemd[1]: Initializing machine ID from VM UUID.
Mar 3 13:48:06.187890 zram_generator::config[1114]: No configuration found.
Mar 3 13:48:06.187906 kernel: Guest personality initialized and is inactive
Mar 3 13:48:06.187918 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Mar 3 13:48:06.187929 kernel: Initialized host personality
Mar 3 13:48:06.187940 kernel: NET: Registered PF_VSOCK protocol family
Mar 3 13:48:06.187952 systemd[1]: Populated /etc with preset unit settings.
Mar 3 13:48:06.187964 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 3 13:48:06.187976 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 3 13:48:06.187988 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 3 13:48:06.187999 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 3 13:48:06.188014 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 3 13:48:06.188027 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 3 13:48:06.188039 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 3 13:48:06.188050 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 3 13:48:06.188062 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 3 13:48:06.188074 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 3 13:48:06.188086 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 3 13:48:06.188097 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 3 13:48:06.188113 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 3 13:48:06.188124 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 3 13:48:06.188136 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 3 13:48:06.188148 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 3 13:48:06.188160 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 3 13:48:06.188172 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 3 13:48:06.188183 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 3 13:48:06.188195 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 3 13:48:06.188210 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 3 13:48:06.188221 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 3 13:48:06.188233 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 3 13:48:06.188250 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 3 13:48:06.188262 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 3 13:48:06.188274 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 3 13:48:06.188287 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 3 13:48:06.188298 systemd[1]: Reached target slices.target - Slice Units.
Mar 3 13:48:06.188339 systemd[1]: Reached target swap.target - Swaps.
Mar 3 13:48:06.188356 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 3 13:48:06.188367 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 3 13:48:06.188379 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 3 13:48:06.188391 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 3 13:48:06.188402 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 3 13:48:06.188414 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 3 13:48:06.188426 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 3 13:48:06.188437 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 3 13:48:06.188449 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 3 13:48:06.188463 systemd[1]: Mounting media.mount - External Media Directory...
Mar 3 13:48:06.188475 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 3 13:48:06.188487 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 3 13:48:06.188498 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 3 13:48:06.188510 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 3 13:48:06.188522 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 3 13:48:06.188534 systemd[1]: Reached target machines.target - Containers.
Mar 3 13:48:06.188548 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 3 13:48:06.188562 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 3 13:48:06.188575 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 3 13:48:06.188587 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 3 13:48:06.188598 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 3 13:48:06.188610 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 3 13:48:06.188623 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 3 13:48:06.188643 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 3 13:48:06.188661 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 3 13:48:06.188677 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 3 13:48:06.188692 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 3 13:48:06.188704 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 3 13:48:06.188715 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 3 13:48:06.188727 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 3 13:48:06.188739 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 3 13:48:06.188751 kernel: fuse: init (API version 7.41)
Mar 3 13:48:06.188762 kernel: loop: module loaded
Mar 3 13:48:06.188773 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 3 13:48:06.188787 kernel: ACPI: bus type drm_connector registered
Mar 3 13:48:06.188798 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 3 13:48:06.188810 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 3 13:48:06.188897 systemd-journald[1199]: Collecting audit messages is disabled.
Mar 3 13:48:06.188924 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 3 13:48:06.188939 systemd-journald[1199]: Journal started
Mar 3 13:48:06.188960 systemd-journald[1199]: Runtime Journal (/run/log/journal/23ee60ea95f144e7ae263097e2411c68) is 6M, max 48.3M, 42.2M free.
Mar 3 13:48:05.690180 systemd[1]: Queued start job for default target multi-user.target.
Mar 3 13:48:05.718741 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 3 13:48:05.719453 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 3 13:48:06.199967 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 3 13:48:06.205421 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 3 13:48:06.211295 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 3 13:48:06.211368 systemd[1]: Stopped verity-setup.service.
Mar 3 13:48:06.218879 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 3 13:48:06.223877 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 3 13:48:06.227210 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 3 13:48:06.230076 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 3 13:48:06.233178 systemd[1]: Mounted media.mount - External Media Directory.
Mar 3 13:48:06.236020 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 3 13:48:06.239647 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 3 13:48:06.243601 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 3 13:48:06.247090 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 3 13:48:06.252285 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 3 13:48:06.256660 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 3 13:48:06.257008 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 3 13:48:06.261217 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 3 13:48:06.261550 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 3 13:48:06.270016 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 3 13:48:06.270359 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 3 13:48:06.273703 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 3 13:48:06.274064 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 3 13:48:06.277666 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 3 13:48:06.278049 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 3 13:48:06.282172 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 3 13:48:06.282526 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 3 13:48:06.286202 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 3 13:48:06.289560 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 3 13:48:06.293382 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 3 13:48:06.297348 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 3 13:48:06.315635 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 3 13:48:06.320483 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 3 13:48:06.325761 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 3 13:48:06.329055 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 3 13:48:06.329205 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 3 13:48:06.333650 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 3 13:48:06.341998 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 3 13:48:06.344728 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 3 13:48:06.347474 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 3 13:48:06.352013 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 3 13:48:06.356510 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 3 13:48:06.358685 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 3 13:48:06.361535 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 3 13:48:06.363579 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 3 13:48:06.370572 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 3 13:48:06.377508 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 3 13:48:06.380770 systemd-journald[1199]: Time spent on flushing to /var/log/journal/23ee60ea95f144e7ae263097e2411c68 is 17ms for 980 entries.
Mar 3 13:48:06.380770 systemd-journald[1199]: System Journal (/var/log/journal/23ee60ea95f144e7ae263097e2411c68) is 8M, max 195.6M, 187.6M free.
Mar 3 13:48:06.407637 systemd-journald[1199]: Received client request to flush runtime journal.
Mar 3 13:48:06.385645 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 3 13:48:06.392570 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 3 13:48:06.395896 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 3 13:48:06.401001 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 3 13:48:06.409736 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 3 13:48:06.418605 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 3 13:48:06.423279 kernel: loop0: detected capacity change from 0 to 219192
Mar 3 13:48:06.426459 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 3 13:48:06.433141 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 3 13:48:06.455364 systemd-tmpfiles[1235]: ACLs are not supported, ignoring.
Mar 3 13:48:06.456080 systemd-tmpfiles[1235]: ACLs are not supported, ignoring.
Mar 3 13:48:06.457882 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 3 13:48:06.465216 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 3 13:48:06.475014 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 3 13:48:06.479499 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 3 13:48:06.480446 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 3 13:48:06.497874 kernel: loop1: detected capacity change from 0 to 110984
Mar 3 13:48:06.532221 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 3 13:48:06.544894 kernel: loop2: detected capacity change from 0 to 128560
Mar 3 13:48:06.537865 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 3 13:48:06.565246 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
Mar 3 13:48:06.565412 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
Mar 3 13:48:06.571609 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 3 13:48:06.588910 kernel: loop3: detected capacity change from 0 to 219192
Mar 3 13:48:06.605095 kernel: loop4: detected capacity change from 0 to 110984
Mar 3 13:48:06.621886 kernel: loop5: detected capacity change from 0 to 128560
Mar 3 13:48:06.637182 (sd-merge)[1261]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 3 13:48:06.638071 (sd-merge)[1261]: Merged extensions into '/usr'.
Mar 3 13:48:06.645435 systemd[1]: Reload requested from client PID 1233 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 3 13:48:06.645471 systemd[1]: Reloading...
Mar 3 13:48:06.722938 zram_generator::config[1286]: No configuration found.
Mar 3 13:48:06.779872 ldconfig[1228]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 3 13:48:06.954103 systemd[1]: Reloading finished in 307 ms.
Mar 3 13:48:06.999643 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 3 13:48:07.003259 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 3 13:48:07.007219 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 3 13:48:07.037595 systemd[1]: Starting ensure-sysext.service...
Mar 3 13:48:07.042492 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 3 13:48:07.048622 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 3 13:48:07.065910 systemd[1]: Reload requested from client PID 1327 ('systemctl') (unit ensure-sysext.service)...
Mar 3 13:48:07.065947 systemd[1]: Reloading...
Mar 3 13:48:07.067769 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Mar 3 13:48:07.067894 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Mar 3 13:48:07.068302 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 3 13:48:07.068623 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 3 13:48:07.069925 systemd-tmpfiles[1328]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 3 13:48:07.070203 systemd-tmpfiles[1328]: ACLs are not supported, ignoring.
Mar 3 13:48:07.070289 systemd-tmpfiles[1328]: ACLs are not supported, ignoring.
Mar 3 13:48:07.077749 systemd-tmpfiles[1328]: Detected autofs mount point /boot during canonicalization of boot.
Mar 3 13:48:07.077776 systemd-tmpfiles[1328]: Skipping /boot
Mar 3 13:48:07.084777 systemd-udevd[1329]: Using default interface naming scheme 'v255'.
Mar 3 13:48:07.092807 systemd-tmpfiles[1328]: Detected autofs mount point /boot during canonicalization of boot.
Mar 3 13:48:07.092899 systemd-tmpfiles[1328]: Skipping /boot
Mar 3 13:48:07.135919 zram_generator::config[1363]: No configuration found.
Mar 3 13:48:07.353869 kernel: mousedev: PS/2 mouse device common for all mice
Mar 3 13:48:07.387074 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 3 13:48:07.387446 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Mar 3 13:48:07.387466 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 3 13:48:07.409884 kernel: ACPI: button: Power Button [PWRF]
Mar 3 13:48:07.438163 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 3 13:48:07.438283 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 3 13:48:07.442302 systemd[1]: Reloading finished in 375 ms.
Mar 3 13:48:07.453478 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 3 13:48:07.457480 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 3 13:48:07.567690 systemd[1]: Finished ensure-sysext.service.
Mar 3 13:48:07.572971 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 3 13:48:07.580012 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 3 13:48:07.587167 kernel: kvm_amd: TSC scaling supported
Mar 3 13:48:07.587202 kernel: kvm_amd: Nested Virtualization enabled
Mar 3 13:48:07.587217 kernel: kvm_amd: Nested Paging enabled
Mar 3 13:48:07.587252 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 3 13:48:07.587359 kernel: kvm_amd: PMU virtualization is disabled
Mar 3 13:48:07.593599 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 3 13:48:07.597853 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 3 13:48:07.635106 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 3 13:48:07.639507 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 3 13:48:07.645000 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 3 13:48:07.653018 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 3 13:48:07.659013 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 3 13:48:07.817361 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 3 13:48:07.823949 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 3 13:48:07.830270 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 3 13:48:07.847493 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 3 13:48:07.874662 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 3 13:48:07.905475 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 3 13:48:07.938105 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 3 13:48:07.957127 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 3 13:48:07.960348 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 3 13:48:07.962059 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 3 13:48:07.969585 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 3 13:48:07.970037 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 3 13:48:07.974728 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 3 13:48:07.975207 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 3 13:48:07.980987 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 3 13:48:07.981890 kernel: EDAC MC: Ver: 3.0.0
Mar 3 13:48:07.981367 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 3 13:48:07.988973 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 3 13:48:07.989363 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 3 13:48:07.993717 augenrules[1480]: No rules
Mar 3 13:48:07.995665 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 3 13:48:07.996169 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 3 13:48:07.999280 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 3 13:48:08.003166 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 3 13:48:08.016435 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 3 13:48:08.016622 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 3 13:48:08.018710 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 3 13:48:08.021396 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 3 13:48:08.021560 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 3 13:48:08.022377 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 3 13:48:08.054085 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 3 13:48:08.089523 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 3 13:48:08.219739 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 3 13:48:08.311662 systemd-networkd[1461]: lo: Link UP
Mar 3 13:48:08.311674 systemd-networkd[1461]: lo: Gained carrier
Mar 3 13:48:08.314072 systemd-networkd[1461]: Enumeration completed
Mar 3 13:48:08.314308 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 3 13:48:08.314778 systemd-networkd[1461]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 3 13:48:08.314815 systemd-networkd[1461]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 3 13:48:08.315679 systemd-networkd[1461]: eth0: Link UP
Mar 3 13:48:08.316109 systemd-networkd[1461]: eth0: Gained carrier
Mar 3 13:48:08.316150 systemd-networkd[1461]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 3 13:48:08.320004 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 3 13:48:08.320506 systemd-resolved[1463]: Positive Trust Anchors:
Mar 3 13:48:08.320515 systemd-resolved[1463]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 3 13:48:08.320540 systemd-resolved[1463]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 3 13:48:08.325365 systemd-resolved[1463]: Defaulting to hostname 'linux'.
Mar 3 13:48:08.326026 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 3 13:48:08.329542 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 3 13:48:08.334431 systemd[1]: Reached target network.target - Network.
Mar 3 13:48:08.338283 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 3 13:48:08.342936 systemd-networkd[1461]: eth0: DHCPv4 address 10.0.0.100/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 3 13:48:08.343205 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 3 13:48:08.346407 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 3 13:48:08.346561 systemd-timesyncd[1465]: Network configuration changed, trying to establish connection.
Mar 3 13:48:08.783068 systemd-timesyncd[1465]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 3 13:48:08.783142 systemd-resolved[1463]: Clock change detected. Flushing caches.
Mar 3 13:48:08.783203 systemd-timesyncd[1465]: Initial clock synchronization to Tue 2026-03-03 13:48:08.782973 UTC.
Mar 3 13:48:08.784965 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 3 13:48:08.788715 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 3 13:48:08.792403 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Mar 3 13:48:08.795523 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 3 13:48:08.799183 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 3 13:48:08.799226 systemd[1]: Reached target paths.target - Path Units.
Mar 3 13:48:08.802755 systemd[1]: Reached target time-set.target - System Time Set.
Mar 3 13:48:08.806880 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 3 13:48:08.811252 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 3 13:48:08.814923 systemd[1]: Reached target timers.target - Timer Units.
Mar 3 13:48:08.818431 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 3 13:48:08.823766 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 3 13:48:08.829505 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 3 13:48:08.833240 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 3 13:48:08.836638 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 3 13:48:08.842551 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 3 13:48:08.846352 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 3 13:48:08.851654 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 3 13:48:08.855425 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 3 13:48:08.862028 systemd[1]: Reached target sockets.target - Socket Units.
Mar 3 13:48:08.864795 systemd[1]: Reached target basic.target - Basic System.
Mar 3 13:48:08.867539 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 3 13:48:08.867664 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 3 13:48:08.869508 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 3 13:48:08.874672 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 3 13:48:08.879554 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 3 13:48:08.895231 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 3 13:48:08.899409 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 3 13:48:08.904341 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 3 13:48:08.906473 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Mar 3 13:48:08.910704 jq[1521]: false
Mar 3 13:48:08.914492 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 3 13:48:08.924660 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 3 13:48:08.929630 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 3 13:48:08.931525 extend-filesystems[1522]: Found /dev/vda6
Mar 3 13:48:08.939512 extend-filesystems[1522]: Found /dev/vda9
Mar 3 13:48:08.939512 extend-filesystems[1522]: Checking size of /dev/vda9
Mar 3 13:48:08.936325 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 3 13:48:08.952783 google_oslogin_nss_cache[1523]: oslogin_cache_refresh[1523]: Refreshing passwd entry cache
Mar 3 13:48:08.941973 oslogin_cache_refresh[1523]: Refreshing passwd entry cache
Mar 3 13:48:08.953371 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 3 13:48:08.958365 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 3 13:48:08.959055 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 3 13:48:08.959304 extend-filesystems[1522]: Resized partition /dev/vda9
Mar 3 13:48:08.960952 systemd[1]: Starting update-engine.service - Update Engine...
Mar 3 13:48:08.964205 google_oslogin_nss_cache[1523]: oslogin_cache_refresh[1523]: Failure getting users, quitting
Mar 3 13:48:08.964205 google_oslogin_nss_cache[1523]: oslogin_cache_refresh[1523]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 3 13:48:08.964205 google_oslogin_nss_cache[1523]: oslogin_cache_refresh[1523]: Refreshing group entry cache
Mar 3 13:48:08.963529 oslogin_cache_refresh[1523]: Failure getting users, quitting
Mar 3 13:48:08.963556 oslogin_cache_refresh[1523]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 3 13:48:08.963666 oslogin_cache_refresh[1523]: Refreshing group entry cache
Mar 3 13:48:08.966232 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 3 13:48:08.970612 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 3 13:48:08.972328 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 3 13:48:08.972658 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 3 13:48:08.974034 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 3 13:48:08.979272 oslogin_cache_refresh[1523]: Failure getting groups, quitting
Mar 3 13:48:08.981957 google_oslogin_nss_cache[1523]: oslogin_cache_refresh[1523]: Failure getting groups, quitting
Mar 3 13:48:08.981957 google_oslogin_nss_cache[1523]: oslogin_cache_refresh[1523]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 3 13:48:08.975698 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 3 13:48:08.979290 oslogin_cache_refresh[1523]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 3 13:48:08.981608 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Mar 3 13:48:08.985281 extend-filesystems[1548]: resize2fs 1.47.3 (8-Jul-2025)
Mar 3 13:48:09.005225 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 3 13:48:08.985459 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Mar 3 13:48:09.015624 (ntainerd)[1553]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 3 13:48:09.017777 systemd[1]: motdgen.service: Deactivated successfully.
Mar 3 13:48:09.018315 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 3 13:48:09.035701 tar[1542]: linux-amd64/LICENSE
Mar 3 13:48:09.036061 tar[1542]: linux-amd64/helm
Mar 3 13:48:09.076000 jq[1540]: true
Mar 3 13:48:09.113908 jq[1562]: true
Mar 3 13:48:09.122177 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 3 13:48:09.123931 dbus-daemon[1519]: [system] SELinux support is enabled
Mar 3 13:48:09.124226 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 3 13:48:09.139365 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 3 13:48:09.148563 update_engine[1539]: I20260303 13:48:09.128918  1539 main.cc:92] Flatcar Update Engine starting
Mar 3 13:48:09.139440 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 3 13:48:09.144545 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 3 13:48:09.144621 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 3 13:48:09.149672 extend-filesystems[1548]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 3 13:48:09.149672 extend-filesystems[1548]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 3 13:48:09.149672 extend-filesystems[1548]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 3 13:48:09.168818 update_engine[1539]: I20260303 13:48:09.149910  1539 update_check_scheduler.cc:74] Next update check in 9m39s
Mar 3 13:48:09.150141 systemd[1]: Started update-engine.service - Update Engine.
Mar 3 13:48:09.168894 extend-filesystems[1522]: Resized filesystem in /dev/vda9
Mar 3 13:48:09.172753 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 3 13:48:09.173907 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 3 13:48:09.340391 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 3 13:48:09.367048 systemd-logind[1535]: Watching system buttons on /dev/input/event2 (Power Button)
Mar 3 13:48:09.367430 systemd-logind[1535]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 3 13:48:09.369711 systemd-logind[1535]: New seat seat0.
Mar 3 13:48:09.381010 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 3 13:48:09.437897 locksmithd[1569]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 3 13:48:09.438536 bash[1583]: Updated "/home/core/.ssh/authorized_keys"
Mar 3 13:48:09.440645 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 3 13:48:09.448498 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 3 13:48:09.584567 sshd_keygen[1561]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 3 13:48:09.627982 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 3 13:48:09.694300 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 3 13:48:09.724775 systemd[1]: issuegen.service: Deactivated successfully.
Mar 3 13:48:09.725141 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 3 13:48:09.733793 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 3 13:48:09.748000 containerd[1553]: time="2026-03-03T13:48:09Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Mar 3 13:48:09.749145 containerd[1553]: time="2026-03-03T13:48:09.749060230Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Mar 3 13:48:09.762486 containerd[1553]: time="2026-03-03T13:48:09.762427736Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.055µs"
Mar 3 13:48:09.762486 containerd[1553]: time="2026-03-03T13:48:09.762467941Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Mar 3 13:48:09.762486 containerd[1553]: time="2026-03-03T13:48:09.762485574Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Mar 3 13:48:09.763122 containerd[1553]: time="2026-03-03T13:48:09.763038126Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Mar 3 13:48:09.763163 containerd[1553]: time="2026-03-03T13:48:09.763127744Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Mar 3 13:48:09.763163 containerd[1553]: time="2026-03-03T13:48:09.763157038Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 3 13:48:09.763285 containerd[1553]: time="2026-03-03T13:48:09.763254630Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 3 13:48:09.763285 containerd[1553]: time="2026-03-03T13:48:09.763280408Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 3 13:48:09.763662 containerd[1553]: time="2026-03-03T13:48:09.763566192Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 3 13:48:09.763662 containerd[1553]: time="2026-03-03T13:48:09.763640120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 3 13:48:09.763662 containerd[1553]: time="2026-03-03T13:48:09.763657693Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 3 13:48:09.763662 containerd[1553]: time="2026-03-03T13:48:09.763666329Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Mar 3 13:48:09.763847 containerd[1553]: time="2026-03-03T13:48:09.763816800Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Mar 3 13:48:09.764236 containerd[1553]: time="2026-03-03T13:48:09.764193583Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 3 13:48:09.764268 containerd[1553]: time="2026-03-03T13:48:09.764244909Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 3 13:48:09.764268 containerd[1553]: time="2026-03-03T13:48:09.764254637Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Mar 3 13:48:09.764342 containerd[1553]: time="2026-03-03T13:48:09.764317645Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Mar 3 13:48:09.764603 containerd[1553]: time="2026-03-03T13:48:09.764554236Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Mar 3 13:48:09.764753 containerd[1553]: time="2026-03-03T13:48:09.764707843Z" level=info msg="metadata content store policy set" policy=shared
Mar 3 13:48:09.770988 containerd[1553]: time="2026-03-03T13:48:09.770947092Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Mar 3 13:48:09.771191 containerd[1553]: time="2026-03-03T13:48:09.771020539Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Mar 3 13:48:09.771191 containerd[1553]: time="2026-03-03T13:48:09.771039685Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Mar 3 13:48:09.771191 containerd[1553]: time="2026-03-03T13:48:09.771056446Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Mar 3 13:48:09.771191 containerd[1553]: time="2026-03-03T13:48:09.771144140Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Mar 3 13:48:09.771191 containerd[1553]: time="2026-03-03T13:48:09.771167303Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Mar 3 13:48:09.771191 containerd[1553]: time="2026-03-03T13:48:09.771189955Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Mar 3 13:48:09.771348 containerd[1553]: time="2026-03-03T13:48:09.771208991Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Mar 3 13:48:09.771348 containerd[1553]: time="2026-03-03T13:48:09.771226003Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Mar 3 13:48:09.771348 containerd[1553]: time="2026-03-03T13:48:09.771240129Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Mar 3 13:48:09.771348 containerd[1553]: time="2026-03-03T13:48:09.771255137Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Mar 3 13:48:09.771348 containerd[1553]: time="2026-03-03T13:48:09.771276016Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Mar 3 13:48:09.771498 containerd[1553]: time="2026-03-03T13:48:09.771458437Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Mar 3 13:48:09.771633 containerd[1553]: time="2026-03-03T13:48:09.771531593Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Mar 3 13:48:09.771633 containerd[1553]: time="2026-03-03T13:48:09.771557351Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Mar 3 13:48:09.771669 containerd[1553]: time="2026-03-03T13:48:09.771644374Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Mar 3 13:48:09.771687 containerd[1553]: time="2026-03-03T13:48:09.771665343Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Mar 3 13:48:09.771854 containerd[1553]: time="2026-03-03T13:48:09.771755151Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Mar 3 13:48:09.771854 containerd[1553]: time="2026-03-03T13:48:09.771796067Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Mar 3 13:48:09.771854 containerd[1553]: time="2026-03-03T13:48:09.771814000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Mar 3 13:48:09.771854 containerd[1553]: time="2026-03-03T13:48:09.771828948Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Mar 3 13:48:09.771854 containerd[1553]: time="2026-03-03T13:48:09.771843325Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Mar 3 13:48:09.772658 containerd[1553]: time="2026-03-03T13:48:09.771857652Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Mar 3 13:48:09.772658 containerd[1553]: time="2026-03-03T13:48:09.771924597Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Mar 3 13:48:09.772658 containerd[1553]: time="2026-03-03T13:48:09.771944043Z" level=info msg="Start snapshots syncer"
Mar 3 13:48:09.772658 containerd[1553]: time="2026-03-03T13:48:09.772015166Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Mar 3 13:48:09.772730 containerd[1553]: time="2026-03-03T13:48:09.772617681Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Mar 3 13:48:09.772730 containerd[1553]: time="2026-03-03T13:48:09.772683103Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Mar 3 13:48:09.776376 containerd[1553]: time="2026-03-03T13:48:09.776300517Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Mar 3 13:48:09.776623 containerd[1553]: time="2026-03-03T13:48:09.776562105Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Mar 3 13:48:09.776656 containerd[1553]: time="2026-03-03T13:48:09.776647905Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Mar 3 13:48:09.776677 containerd[1553]: time="2026-03-03T13:48:09.776662022Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Mar 3 13:48:09.776677 containerd[1553]: time="2026-03-03T13:48:09.776671890Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Mar 3 13:48:09.776709 containerd[1553]: time="2026-03-03T13:48:09.776685746Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Mar 3 13:48:09.776709 containerd[1553]: time="2026-03-03T13:48:09.776695794Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Mar 3 13:48:09.776709 containerd[1553]: time="2026-03-03T13:48:09.776705353Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Mar 3 13:48:09.776806 containerd[1553]: time="2026-03-03T13:48:09.776724368Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Mar 3 13:48:09.776806 containerd[1553]: time="2026-03-03T13:48:09.776734947Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Mar 3 13:48:09.776806 containerd[1553]: time="2026-03-03T13:48:09.776771276Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Mar 3 13:48:09.776858 containerd[1553]: time="2026-03-03T13:48:09.776833732Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 3 13:48:09.776858 containerd[1553]: time="2026-03-03T13:48:09.776848820Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 3 13:48:09.776896 containerd[1553]: time="2026-03-03T13:48:09.776856645Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 3 13:48:09.777479 containerd[1553]: time="2026-03-03T13:48:09.776943307Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 3 13:48:09.777479 containerd[1553]: time="2026-03-03T13:48:09.776956271Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Mar 3 13:48:09.777479 containerd[1553]: time="2026-03-03T13:48:09.776967141Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Mar 3 13:48:09.777479 containerd[1553]: time="2026-03-03T13:48:09.777013107Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Mar 3 13:48:09.777479 containerd[1553]: time="2026-03-03T13:48:09.777031361Z" level=info msg="runtime interface created"
Mar 3 13:48:09.777479 containerd[1553]: time="2026-03-03T13:48:09.777037793Z" level=info msg="created NRI interface"
Mar 3 13:48:09.777479 containerd[1553]: time="2026-03-03T13:48:09.777045588Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Mar 3 13:48:09.777479 containerd[1553]: time="2026-03-03T13:48:09.777055677Z" level=info msg="Connect containerd service"
Mar 3 13:48:09.777479 containerd[1553]: time="2026-03-03T13:48:09.777121980Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 3 13:48:09.778367 containerd[1553]: time="2026-03-03T13:48:09.778321820Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 3 13:48:09.786215 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 3 13:48:09.794530 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 3 13:48:09.855507 kernel: hrtimer: interrupt took 23062627 ns
Mar 3 13:48:09.862180 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 3 13:48:09.867957 systemd[1]: Reached target getty.target - Login Prompts.
Mar 3 13:48:10.295736 tar[1542]: linux-amd64/README.md
Mar 3 13:48:10.337820 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 3 13:48:10.432694 systemd-networkd[1461]: eth0: Gained IPv6LL
Mar 3 13:48:10.437054 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 3 13:48:10.442429 systemd[1]: Reached target network-online.target - Network is Online.
Mar 3 13:48:10.448429 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Mar 3 13:48:10.453220 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 13:48:10.465484 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 3 13:48:10.499993 containerd[1553]: time="2026-03-03T13:48:10.499671305Z" level=info msg="Start subscribing containerd event" Mar 3 13:48:10.500974 containerd[1553]: time="2026-03-03T13:48:10.500329313Z" level=info msg="Start recovering state" Mar 3 13:48:10.500974 containerd[1553]: time="2026-03-03T13:48:10.500721596Z" level=info msg="Start event monitor" Mar 3 13:48:10.500974 containerd[1553]: time="2026-03-03T13:48:10.500775777Z" level=info msg="Start cni network conf syncer for default" Mar 3 13:48:10.500974 containerd[1553]: time="2026-03-03T13:48:10.500786908Z" level=info msg="Start streaming server" Mar 3 13:48:10.500974 containerd[1553]: time="2026-03-03T13:48:10.500805232Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 3 13:48:10.500974 containerd[1553]: time="2026-03-03T13:48:10.500812626Z" level=info msg="runtime interface starting up..." Mar 3 13:48:10.500974 containerd[1553]: time="2026-03-03T13:48:10.500819208Z" level=info msg="starting plugins..." Mar 3 13:48:10.500974 containerd[1553]: time="2026-03-03T13:48:10.500890732Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 3 13:48:10.502769 containerd[1553]: time="2026-03-03T13:48:10.502668634Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 3 13:48:10.502889 containerd[1553]: time="2026-03-03T13:48:10.502820277Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 3 13:48:10.509911 systemd[1]: Started containerd.service - containerd container runtime. Mar 3 13:48:10.512277 containerd[1553]: time="2026-03-03T13:48:10.510920326Z" level=info msg="containerd successfully booted in 0.763486s" Mar 3 13:48:10.518510 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 3 13:48:10.520312 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 3 13:48:10.525357 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Mar 3 13:48:10.530105 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 3 13:48:12.145648 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 3 13:48:12.151835 systemd[1]: Started sshd@0-10.0.0.100:22-10.0.0.1:40158.service - OpenSSH per-connection server daemon (10.0.0.1:40158). Mar 3 13:48:12.264443 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 3 13:48:12.267700 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 40158 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:48:12.267427 sshd-session[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:48:12.268852 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 3 13:48:12.271641 systemd[1]: Startup finished in 3.384s (kernel) + 7.106s (initrd) + 7.091s (userspace) = 17.582s. Mar 3 13:48:12.276295 (kubelet)[1657]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 3 13:48:12.415773 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 3 13:48:12.420564 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 3 13:48:12.441817 systemd-logind[1535]: New session 1 of user core. Mar 3 13:48:12.493509 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 3 13:48:12.498194 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 3 13:48:12.513133 (systemd)[1664]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 3 13:48:12.517164 systemd-logind[1535]: New session c1 of user core. Mar 3 13:48:12.676609 systemd[1664]: Queued start job for default target default.target. Mar 3 13:48:12.694276 systemd[1664]: Created slice app.slice - User Application Slice. Mar 3 13:48:12.694332 systemd[1664]: Reached target paths.target - Paths. 
Mar 3 13:48:12.694429 systemd[1664]: Reached target timers.target - Timers. Mar 3 13:48:12.696817 systemd[1664]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 3 13:48:12.737055 systemd[1664]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 3 13:48:12.737313 systemd[1664]: Reached target sockets.target - Sockets. Mar 3 13:48:12.737406 systemd[1664]: Reached target basic.target - Basic System. Mar 3 13:48:12.737485 systemd[1664]: Reached target default.target - Main User Target. Mar 3 13:48:12.737536 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 3 13:48:12.737542 systemd[1664]: Startup finished in 212ms. Mar 3 13:48:12.744275 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 3 13:48:12.771423 systemd[1]: Started sshd@1-10.0.0.100:22-10.0.0.1:40160.service - OpenSSH per-connection server daemon (10.0.0.1:40160). Mar 3 13:48:12.906341 sshd[1681]: Accepted publickey for core from 10.0.0.1 port 40160 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:48:12.908520 sshd-session[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:48:12.914462 systemd-logind[1535]: New session 2 of user core. Mar 3 13:48:12.923243 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 3 13:48:12.943232 sshd[1684]: Connection closed by 10.0.0.1 port 40160 Mar 3 13:48:12.944315 sshd-session[1681]: pam_unix(sshd:session): session closed for user core Mar 3 13:48:12.956248 systemd[1]: sshd@1-10.0.0.100:22-10.0.0.1:40160.service: Deactivated successfully. Mar 3 13:48:12.958804 systemd[1]: session-2.scope: Deactivated successfully. Mar 3 13:48:12.961059 systemd-logind[1535]: Session 2 logged out. Waiting for processes to exit. Mar 3 13:48:12.963450 systemd[1]: Started sshd@2-10.0.0.100:22-10.0.0.1:40168.service - OpenSSH per-connection server daemon (10.0.0.1:40168). Mar 3 13:48:12.965177 systemd-logind[1535]: Removed session 2. 
Mar 3 13:48:13.044614 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 40168 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:48:13.046705 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:48:13.054141 systemd-logind[1535]: New session 3 of user core. Mar 3 13:48:13.061237 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 3 13:48:13.078276 sshd[1694]: Connection closed by 10.0.0.1 port 40168 Mar 3 13:48:13.079759 kubelet[1657]: E0303 13:48:13.079702 1657 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 3 13:48:13.080329 sshd-session[1690]: pam_unix(sshd:session): session closed for user core Mar 3 13:48:13.093050 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 3 13:48:13.093423 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 3 13:48:13.094021 systemd[1]: kubelet.service: Consumed 2.271s CPU time, 257.9M memory peak. Mar 3 13:48:13.094840 systemd[1]: sshd@2-10.0.0.100:22-10.0.0.1:40168.service: Deactivated successfully. Mar 3 13:48:13.097518 systemd[1]: session-3.scope: Deactivated successfully. Mar 3 13:48:13.099802 systemd-logind[1535]: Session 3 logged out. Waiting for processes to exit. Mar 3 13:48:13.105030 systemd[1]: Started sshd@3-10.0.0.100:22-10.0.0.1:40182.service - OpenSSH per-connection server daemon (10.0.0.1:40182). Mar 3 13:48:13.105927 systemd-logind[1535]: Removed session 3. 
Mar 3 13:48:13.178253 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 40182 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:48:13.179951 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:48:13.186322 systemd-logind[1535]: New session 4 of user core. Mar 3 13:48:13.200378 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 3 13:48:13.215966 sshd[1704]: Connection closed by 10.0.0.1 port 40182 Mar 3 13:48:13.216289 sshd-session[1701]: pam_unix(sshd:session): session closed for user core Mar 3 13:48:13.234829 systemd[1]: sshd@3-10.0.0.100:22-10.0.0.1:40182.service: Deactivated successfully. Mar 3 13:48:13.237507 systemd[1]: session-4.scope: Deactivated successfully. Mar 3 13:48:13.238934 systemd-logind[1535]: Session 4 logged out. Waiting for processes to exit. Mar 3 13:48:13.242323 systemd[1]: Started sshd@4-10.0.0.100:22-10.0.0.1:40198.service - OpenSSH per-connection server daemon (10.0.0.1:40198). Mar 3 13:48:13.243821 systemd-logind[1535]: Removed session 4. Mar 3 13:48:13.303661 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 40198 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:48:13.305043 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:48:13.310747 systemd-logind[1535]: New session 5 of user core. Mar 3 13:48:13.321256 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 3 13:48:13.343394 sudo[1714]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 3 13:48:13.343781 sudo[1714]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 3 13:48:13.365450 sudo[1714]: pam_unix(sudo:session): session closed for user root Mar 3 13:48:13.366883 sshd[1713]: Connection closed by 10.0.0.1 port 40198 Mar 3 13:48:13.367437 sshd-session[1710]: pam_unix(sshd:session): session closed for user core Mar 3 13:48:13.381795 systemd[1]: sshd@4-10.0.0.100:22-10.0.0.1:40198.service: Deactivated successfully. Mar 3 13:48:13.383499 systemd[1]: session-5.scope: Deactivated successfully. Mar 3 13:48:13.384511 systemd-logind[1535]: Session 5 logged out. Waiting for processes to exit. Mar 3 13:48:13.386946 systemd[1]: Started sshd@5-10.0.0.100:22-10.0.0.1:40204.service - OpenSSH per-connection server daemon (10.0.0.1:40204). Mar 3 13:48:13.388360 systemd-logind[1535]: Removed session 5. Mar 3 13:48:13.439162 sshd[1720]: Accepted publickey for core from 10.0.0.1 port 40204 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:48:13.440509 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:48:13.445821 systemd-logind[1535]: New session 6 of user core. Mar 3 13:48:13.457241 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 3 13:48:13.471829 sudo[1725]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 3 13:48:13.472265 sudo[1725]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 3 13:48:13.479269 sudo[1725]: pam_unix(sudo:session): session closed for user root Mar 3 13:48:13.486392 sudo[1724]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 3 13:48:13.486821 sudo[1724]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 3 13:48:13.497955 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 3 13:48:13.559746 augenrules[1747]: No rules Mar 3 13:48:13.561463 systemd[1]: audit-rules.service: Deactivated successfully. Mar 3 13:48:13.561828 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 3 13:48:13.562832 sudo[1724]: pam_unix(sudo:session): session closed for user root Mar 3 13:48:13.564375 sshd[1723]: Connection closed by 10.0.0.1 port 40204 Mar 3 13:48:13.564749 sshd-session[1720]: pam_unix(sshd:session): session closed for user core Mar 3 13:48:13.577490 systemd[1]: sshd@5-10.0.0.100:22-10.0.0.1:40204.service: Deactivated successfully. Mar 3 13:48:13.579328 systemd[1]: session-6.scope: Deactivated successfully. Mar 3 13:48:13.580256 systemd-logind[1535]: Session 6 logged out. Waiting for processes to exit. Mar 3 13:48:13.582946 systemd[1]: Started sshd@6-10.0.0.100:22-10.0.0.1:40208.service - OpenSSH per-connection server daemon (10.0.0.1:40208). Mar 3 13:48:13.583769 systemd-logind[1535]: Removed session 6. Mar 3 13:48:13.632656 sshd[1756]: Accepted publickey for core from 10.0.0.1 port 40208 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:48:13.634045 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:48:13.639576 systemd-logind[1535]: New session 7 of user core. 
Mar 3 13:48:13.646244 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 3 13:48:13.660137 sudo[1760]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 3 13:48:13.660485 sudo[1760]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 3 13:48:15.196746 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 3 13:48:15.226759 (dockerd)[1782]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 3 13:48:16.275625 dockerd[1782]: time="2026-03-03T13:48:16.275438980Z" level=info msg="Starting up" Mar 3 13:48:16.277679 dockerd[1782]: time="2026-03-03T13:48:16.277618010Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 3 13:48:16.325985 dockerd[1782]: time="2026-03-03T13:48:16.325908086Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Mar 3 13:48:16.496960 dockerd[1782]: time="2026-03-03T13:48:16.496864608Z" level=info msg="Loading containers: start." Mar 3 13:48:16.514186 kernel: Initializing XFRM netlink socket Mar 3 13:48:16.993338 systemd-networkd[1461]: docker0: Link UP Mar 3 13:48:17.005156 dockerd[1782]: time="2026-03-03T13:48:17.005053163Z" level=info msg="Loading containers: done." Mar 3 13:48:17.035304 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2199997851-merged.mount: Deactivated successfully. 
Mar 3 13:48:17.036853 dockerd[1782]: time="2026-03-03T13:48:17.036802268Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 3 13:48:17.037021 dockerd[1782]: time="2026-03-03T13:48:17.036944263Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Mar 3 13:48:17.037252 dockerd[1782]: time="2026-03-03T13:48:17.037183961Z" level=info msg="Initializing buildkit" Mar 3 13:48:17.085211 dockerd[1782]: time="2026-03-03T13:48:17.085066565Z" level=info msg="Completed buildkit initialization" Mar 3 13:48:17.095262 dockerd[1782]: time="2026-03-03T13:48:17.095160349Z" level=info msg="Daemon has completed initialization" Mar 3 13:48:17.095826 dockerd[1782]: time="2026-03-03T13:48:17.095516323Z" level=info msg="API listen on /run/docker.sock" Mar 3 13:48:17.096130 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 3 13:48:18.291225 containerd[1553]: time="2026-03-03T13:48:18.290954021Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\"" Mar 3 13:48:19.231353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1495730442.mount: Deactivated successfully. 
Mar 3 13:48:20.470275 containerd[1553]: time="2026-03-03T13:48:20.470207073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:48:20.471030 containerd[1553]: time="2026-03-03T13:48:20.470965666Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074497" Mar 3 13:48:20.472322 containerd[1553]: time="2026-03-03T13:48:20.472266828Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:48:20.475292 containerd[1553]: time="2026-03-03T13:48:20.475223605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:48:20.476371 containerd[1553]: time="2026-03-03T13:48:20.476295729Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 2.185092002s" Mar 3 13:48:20.476371 containerd[1553]: time="2026-03-03T13:48:20.476352376Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\"" Mar 3 13:48:20.479547 containerd[1553]: time="2026-03-03T13:48:20.479297872Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\"" Mar 3 13:48:21.746332 containerd[1553]: time="2026-03-03T13:48:21.746242873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:48:21.747142 containerd[1553]: time="2026-03-03T13:48:21.747104166Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165823" Mar 3 13:48:21.748512 containerd[1553]: time="2026-03-03T13:48:21.748422279Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:48:21.751310 containerd[1553]: time="2026-03-03T13:48:21.751228168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:48:21.752330 containerd[1553]: time="2026-03-03T13:48:21.752288034Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 1.272957781s" Mar 3 13:48:21.752330 containerd[1553]: time="2026-03-03T13:48:21.752327689Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\"" Mar 3 13:48:21.753433 containerd[1553]: time="2026-03-03T13:48:21.753210511Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\"" Mar 3 13:48:23.030873 containerd[1553]: time="2026-03-03T13:48:23.030765739Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:48:23.031859 containerd[1553]: time="2026-03-03T13:48:23.031798748Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729824" Mar 3 13:48:23.033105 containerd[1553]: time="2026-03-03T13:48:23.032964424Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:48:23.035675 containerd[1553]: time="2026-03-03T13:48:23.035599504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:48:23.036505 containerd[1553]: time="2026-03-03T13:48:23.036467348Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 1.283212574s" Mar 3 13:48:23.036505 containerd[1553]: time="2026-03-03T13:48:23.036504368Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\"" Mar 3 13:48:23.037237 containerd[1553]: time="2026-03-03T13:48:23.037174438Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\"" Mar 3 13:48:23.344723 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 3 13:48:23.346721 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 3 13:48:23.723567 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 3 13:48:23.735695 (kubelet)[2078]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 3 13:48:23.796733 kubelet[2078]: E0303 13:48:23.796556 2078 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 3 13:48:23.801567 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 3 13:48:23.801871 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 3 13:48:23.802946 systemd[1]: kubelet.service: Consumed 365ms CPU time, 110.4M memory peak. Mar 3 13:48:24.072600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3122609220.mount: Deactivated successfully. Mar 3 13:48:24.303444 containerd[1553]: time="2026-03-03T13:48:24.303352791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:48:24.304431 containerd[1553]: time="2026-03-03T13:48:24.304363315Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861770" Mar 3 13:48:24.305614 containerd[1553]: time="2026-03-03T13:48:24.305556355Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:48:24.307981 containerd[1553]: time="2026-03-03T13:48:24.307911862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:48:24.308402 containerd[1553]: time="2026-03-03T13:48:24.308355736Z" level=info msg="Pulled 
image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 1.271143727s" Mar 3 13:48:24.308402 containerd[1553]: time="2026-03-03T13:48:24.308394379Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\"" Mar 3 13:48:24.309350 containerd[1553]: time="2026-03-03T13:48:24.309298709Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Mar 3 13:48:24.747284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3983930611.mount: Deactivated successfully. Mar 3 13:48:25.609679 containerd[1553]: time="2026-03-03T13:48:25.609581598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:48:25.610629 containerd[1553]: time="2026-03-03T13:48:25.610584898Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Mar 3 13:48:25.612271 containerd[1553]: time="2026-03-03T13:48:25.612137317Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:48:25.615367 containerd[1553]: time="2026-03-03T13:48:25.615334114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:48:25.616906 containerd[1553]: time="2026-03-03T13:48:25.616856131Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id 
\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.307508511s" Mar 3 13:48:25.616906 containerd[1553]: time="2026-03-03T13:48:25.616901687Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Mar 3 13:48:25.617730 containerd[1553]: time="2026-03-03T13:48:25.617535409Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 3 13:48:26.007728 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2586378497.mount: Deactivated successfully. Mar 3 13:48:26.014364 containerd[1553]: time="2026-03-03T13:48:26.014268804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:48:26.015342 containerd[1553]: time="2026-03-03T13:48:26.015312880Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Mar 3 13:48:26.016552 containerd[1553]: time="2026-03-03T13:48:26.016489259Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:48:26.019053 containerd[1553]: time="2026-03-03T13:48:26.018991541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:48:26.019552 containerd[1553]: time="2026-03-03T13:48:26.019484777Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo 
digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 401.922076ms" Mar 3 13:48:26.019552 containerd[1553]: time="2026-03-03T13:48:26.019524571Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 3 13:48:26.020511 containerd[1553]: time="2026-03-03T13:48:26.020490314Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Mar 3 13:48:26.466057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount522519371.mount: Deactivated successfully. Mar 3 13:48:27.318940 containerd[1553]: time="2026-03-03T13:48:27.318854289Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:48:27.319942 containerd[1553]: time="2026-03-03T13:48:27.319880208Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860674" Mar 3 13:48:27.321058 containerd[1553]: time="2026-03-03T13:48:27.320990407Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:48:27.324240 containerd[1553]: time="2026-03-03T13:48:27.324169773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:48:27.325676 containerd[1553]: time="2026-03-03T13:48:27.325556641Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.305039717s" Mar 3 
13:48:27.325676 containerd[1553]: time="2026-03-03T13:48:27.325602035Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Mar 3 13:48:29.657279 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 3 13:48:29.657548 systemd[1]: kubelet.service: Consumed 365ms CPU time, 110.4M memory peak. Mar 3 13:48:29.660333 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 3 13:48:29.690508 systemd[1]: Reload requested from client PID 2240 ('systemctl') (unit session-7.scope)... Mar 3 13:48:29.690538 systemd[1]: Reloading... Mar 3 13:48:29.777177 zram_generator::config[2281]: No configuration found. Mar 3 13:48:30.023535 systemd[1]: Reloading finished in 332 ms. Mar 3 13:48:30.104014 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 3 13:48:30.104242 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 3 13:48:30.104709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 3 13:48:30.104798 systemd[1]: kubelet.service: Consumed 157ms CPU time, 98.3M memory peak. Mar 3 13:48:30.107067 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 3 13:48:30.334022 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 3 13:48:30.348757 (kubelet)[2330]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 3 13:48:30.494310 kubelet[2330]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 3 13:48:30.494310 kubelet[2330]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 3 13:48:30.494792 kubelet[2330]: I0303 13:48:30.494359 2330 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 3 13:48:31.029841 kubelet[2330]: I0303 13:48:31.029806 2330 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 3 13:48:31.029841 kubelet[2330]: I0303 13:48:31.029830 2330 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 3 13:48:31.032812 kubelet[2330]: I0303 13:48:31.032743 2330 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 3 13:48:31.032812 kubelet[2330]: I0303 13:48:31.032793 2330 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 3 13:48:31.033048 kubelet[2330]: I0303 13:48:31.032991 2330 server.go:956] "Client rotation is on, will bootstrap in background" Mar 3 13:48:31.070565 kubelet[2330]: I0303 13:48:31.070448 2330 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 3 13:48:31.071410 kubelet[2330]: E0303 13:48:31.071355 2330 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.100:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 3 13:48:31.075167 kubelet[2330]: I0303 13:48:31.075137 2330 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 3 13:48:31.084457 kubelet[2330]: I0303 13:48:31.084401 2330 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 3 13:48:31.086119 kubelet[2330]: I0303 13:48:31.085984 2330 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 3 13:48:31.086404 kubelet[2330]: I0303 13:48:31.086042 2330 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 3 13:48:31.086532 kubelet[2330]: I0303 13:48:31.086440 2330 topology_manager.go:138] "Creating topology manager with none policy" Mar 3 13:48:31.086532 
kubelet[2330]: I0303 13:48:31.086455 2330 container_manager_linux.go:306] "Creating device plugin manager" Mar 3 13:48:31.086690 kubelet[2330]: I0303 13:48:31.086612 2330 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 3 13:48:31.090068 kubelet[2330]: I0303 13:48:31.090026 2330 state_mem.go:36] "Initialized new in-memory state store" Mar 3 13:48:31.090425 kubelet[2330]: I0303 13:48:31.090376 2330 kubelet.go:475] "Attempting to sync node with API server" Mar 3 13:48:31.090425 kubelet[2330]: I0303 13:48:31.090408 2330 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 3 13:48:31.090640 kubelet[2330]: I0303 13:48:31.090521 2330 kubelet.go:387] "Adding apiserver pod source" Mar 3 13:48:31.090640 kubelet[2330]: I0303 13:48:31.090581 2330 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 3 13:48:31.091691 kubelet[2330]: E0303 13:48:31.091591 2330 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 3 13:48:31.091815 kubelet[2330]: E0303 13:48:31.091764 2330 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 3 13:48:31.093119 kubelet[2330]: I0303 13:48:31.093028 2330 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 3 13:48:31.093963 kubelet[2330]: I0303 13:48:31.093873 2330 kubelet.go:940] "Not starting ClusterTrustBundle informer because we 
are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 3 13:48:31.093963 kubelet[2330]: I0303 13:48:31.093928 2330 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 3 13:48:31.094058 kubelet[2330]: W0303 13:48:31.094036 2330 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 3 13:48:31.098735 kubelet[2330]: I0303 13:48:31.098634 2330 server.go:1262] "Started kubelet" Mar 3 13:48:31.100170 kubelet[2330]: I0303 13:48:31.099599 2330 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 3 13:48:31.100170 kubelet[2330]: I0303 13:48:31.099749 2330 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 3 13:48:31.100506 kubelet[2330]: I0303 13:48:31.100435 2330 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 3 13:48:31.100618 kubelet[2330]: I0303 13:48:31.100578 2330 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 3 13:48:31.102311 kubelet[2330]: I0303 13:48:31.102232 2330 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 3 13:48:31.104129 kubelet[2330]: I0303 13:48:31.103870 2330 server.go:310] "Adding debug handlers to kubelet server" Mar 3 13:48:31.105711 kubelet[2330]: I0303 13:48:31.105629 2330 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 3 13:48:31.106324 kubelet[2330]: E0303 13:48:31.105305 2330 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.100:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.100:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189958eed22beacc default 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-03 13:48:31.098555084 +0000 UTC m=+0.744683010,LastTimestamp:2026-03-03 13:48:31.098555084 +0000 UTC m=+0.744683010,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 3 13:48:31.107454 kubelet[2330]: I0303 13:48:31.107409 2330 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 3 13:48:31.107562 kubelet[2330]: E0303 13:48:31.107524 2330 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.100:6443: connect: connection refused" interval="200ms" Mar 3 13:48:31.107562 kubelet[2330]: E0303 13:48:31.107543 2330 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 3 13:48:31.107700 kubelet[2330]: I0303 13:48:31.107628 2330 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 3 13:48:31.108177 kubelet[2330]: I0303 13:48:31.107825 2330 reconciler.go:29] "Reconciler: start to sync state" Mar 3 13:48:31.108689 kubelet[2330]: E0303 13:48:31.108597 2330 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 3 13:48:31.109219 kubelet[2330]: I0303 13:48:31.109164 2330 factory.go:223] Registration of the systemd container factory successfully Mar 3 13:48:31.109374 kubelet[2330]: I0303 13:48:31.109313 
2330 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 3 13:48:31.110693 kubelet[2330]: E0303 13:48:31.110624 2330 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 3 13:48:31.110815 kubelet[2330]: I0303 13:48:31.110759 2330 factory.go:223] Registration of the containerd container factory successfully Mar 3 13:48:31.128316 kubelet[2330]: I0303 13:48:31.128259 2330 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 3 13:48:31.128410 kubelet[2330]: I0303 13:48:31.128290 2330 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 3 13:48:31.128410 kubelet[2330]: I0303 13:48:31.128344 2330 state_mem.go:36] "Initialized new in-memory state store" Mar 3 13:48:31.131231 kubelet[2330]: I0303 13:48:31.131197 2330 policy_none.go:49] "None policy: Start" Mar 3 13:48:31.131296 kubelet[2330]: I0303 13:48:31.131277 2330 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 3 13:48:31.131330 kubelet[2330]: I0303 13:48:31.131304 2330 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 3 13:48:31.133812 kubelet[2330]: I0303 13:48:31.133387 2330 policy_none.go:47] "Start" Mar 3 13:48:31.138801 kubelet[2330]: I0303 13:48:31.138758 2330 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 3 13:48:31.140419 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 3 13:48:31.141620 kubelet[2330]: I0303 13:48:31.141548 2330 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 3 13:48:31.141702 kubelet[2330]: I0303 13:48:31.141651 2330 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 3 13:48:31.141750 kubelet[2330]: I0303 13:48:31.141742 2330 kubelet.go:2428] "Starting kubelet main sync loop" Mar 3 13:48:31.141817 kubelet[2330]: E0303 13:48:31.141785 2330 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 3 13:48:31.142452 kubelet[2330]: E0303 13:48:31.142378 2330 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 3 13:48:31.154544 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 3 13:48:31.158960 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 3 13:48:31.172033 kubelet[2330]: E0303 13:48:31.171989 2330 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 3 13:48:31.172463 kubelet[2330]: I0303 13:48:31.172363 2330 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 3 13:48:31.172463 kubelet[2330]: I0303 13:48:31.172408 2330 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 3 13:48:31.173449 kubelet[2330]: I0303 13:48:31.173429 2330 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 3 13:48:31.175402 kubelet[2330]: E0303 13:48:31.174986 2330 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 3 13:48:31.175402 kubelet[2330]: E0303 13:48:31.175211 2330 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 3 13:48:31.258147 systemd[1]: Created slice kubepods-burstable-pod09aeec0f76c28ab963055694829e2edd.slice - libcontainer container kubepods-burstable-pod09aeec0f76c28ab963055694829e2edd.slice. Mar 3 13:48:31.275400 kubelet[2330]: E0303 13:48:31.275278 2330 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 13:48:31.276155 kubelet[2330]: I0303 13:48:31.276126 2330 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 3 13:48:31.276633 systemd[1]: Created slice kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice - libcontainer container kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice. Mar 3 13:48:31.277337 kubelet[2330]: E0303 13:48:31.277001 2330 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.100:6443/api/v1/nodes\": dial tcp 10.0.0.100:6443: connect: connection refused" node="localhost" Mar 3 13:48:31.280058 kubelet[2330]: E0303 13:48:31.279942 2330 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 13:48:31.282447 systemd[1]: Created slice kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice - libcontainer container kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice. 
Mar 3 13:48:31.285065 kubelet[2330]: E0303 13:48:31.285011 2330 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 13:48:31.308892 kubelet[2330]: E0303 13:48:31.308811 2330 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.100:6443: connect: connection refused" interval="400ms" Mar 3 13:48:31.409510 kubelet[2330]: I0303 13:48:31.409448 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/09aeec0f76c28ab963055694829e2edd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"09aeec0f76c28ab963055694829e2edd\") " pod="kube-system/kube-apiserver-localhost" Mar 3 13:48:31.409720 kubelet[2330]: I0303 13:48:31.409584 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/09aeec0f76c28ab963055694829e2edd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"09aeec0f76c28ab963055694829e2edd\") " pod="kube-system/kube-apiserver-localhost" Mar 3 13:48:31.409720 kubelet[2330]: I0303 13:48:31.409623 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/09aeec0f76c28ab963055694829e2edd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"09aeec0f76c28ab963055694829e2edd\") " pod="kube-system/kube-apiserver-localhost" Mar 3 13:48:31.409720 kubelet[2330]: I0303 13:48:31.409690 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 3 13:48:31.409797 kubelet[2330]: I0303 13:48:31.409722 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 3 13:48:31.409797 kubelet[2330]: I0303 13:48:31.409755 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost" Mar 3 13:48:31.409797 kubelet[2330]: I0303 13:48:31.409779 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 3 13:48:31.409860 kubelet[2330]: I0303 13:48:31.409803 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 3 13:48:31.409860 kubelet[2330]: I0303 13:48:31.409826 2330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 3 13:48:31.479735 kubelet[2330]: I0303 13:48:31.479433 2330 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 3 13:48:31.479940 kubelet[2330]: E0303 13:48:31.479898 2330 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.100:6443/api/v1/nodes\": dial tcp 10.0.0.100:6443: connect: connection refused" node="localhost" Mar 3 13:48:31.580727 kubelet[2330]: E0303 13:48:31.580479 2330 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:48:31.582121 containerd[1553]: time="2026-03-03T13:48:31.581853659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:09aeec0f76c28ab963055694829e2edd,Namespace:kube-system,Attempt:0,}" Mar 3 13:48:31.583587 kubelet[2330]: E0303 13:48:31.583546 2330 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:48:31.584123 containerd[1553]: time="2026-03-03T13:48:31.584028974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,}" Mar 3 13:48:31.588136 kubelet[2330]: E0303 13:48:31.587897 2330 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:48:31.588379 containerd[1553]: time="2026-03-03T13:48:31.588281376Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,}" Mar 3 13:48:31.710313 kubelet[2330]: E0303 13:48:31.710222 2330 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.100:6443: connect: connection refused" interval="800ms" Mar 3 13:48:31.881812 kubelet[2330]: I0303 13:48:31.881573 2330 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 3 13:48:31.881972 kubelet[2330]: E0303 13:48:31.881927 2330 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.100:6443/api/v1/nodes\": dial tcp 10.0.0.100:6443: connect: connection refused" node="localhost" Mar 3 13:48:32.018798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount636694260.mount: Deactivated successfully. Mar 3 13:48:32.025904 containerd[1553]: time="2026-03-03T13:48:32.025843423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 3 13:48:32.028019 containerd[1553]: time="2026-03-03T13:48:32.027923056Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 3 13:48:32.033809 containerd[1553]: time="2026-03-03T13:48:32.033753958Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 3 13:48:32.035328 containerd[1553]: time="2026-03-03T13:48:32.035201704Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 3 
13:48:32.036730 containerd[1553]: time="2026-03-03T13:48:32.036600010Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 3 13:48:32.037326 containerd[1553]: time="2026-03-03T13:48:32.037271859Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 3 13:48:32.038518 containerd[1553]: time="2026-03-03T13:48:32.038490995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 3 13:48:32.039898 containerd[1553]: time="2026-03-03T13:48:32.039250480Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 3 13:48:32.039898 containerd[1553]: time="2026-03-03T13:48:32.039259249Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 449.686112ms" Mar 3 13:48:32.042720 containerd[1553]: time="2026-03-03T13:48:32.042680607Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 458.721203ms" Mar 3 13:48:32.043561 containerd[1553]: time="2026-03-03T13:48:32.043493715Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", 
repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 456.487359ms" Mar 3 13:48:32.098171 containerd[1553]: time="2026-03-03T13:48:32.098120254Z" level=info msg="connecting to shim 5f1ef4742c77e3acd218799d3145a7c27bd36eaa28dc8ba2af9c52885f0bc0ba" address="unix:///run/containerd/s/fc316ce9c5c7bc955ff2fc08d99a78d98e3f5e7dd38e03428134a1d712bfa786" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:48:32.098933 containerd[1553]: time="2026-03-03T13:48:32.098865306Z" level=info msg="connecting to shim 61fe7f52153e7f858f25896d7c3c5fd2d47171f2dbcee88d449213c027c04f09" address="unix:///run/containerd/s/2358e42243dd3b71465d6e8a26d4f6b3f4f3dba5f8945ceca57ed18d5d8189d0" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:48:32.106463 containerd[1553]: time="2026-03-03T13:48:32.106378522Z" level=info msg="connecting to shim f5b17e2f8e3167c310f86da3644ed6f557a80698bafa9c82d38d8d3797f90b77" address="unix:///run/containerd/s/021f7abf619e6e445ec05910eeff5affd91a16c4c03a9da76e07f6c2605ff48c" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:48:32.134272 systemd[1]: Started cri-containerd-61fe7f52153e7f858f25896d7c3c5fd2d47171f2dbcee88d449213c027c04f09.scope - libcontainer container 61fe7f52153e7f858f25896d7c3c5fd2d47171f2dbcee88d449213c027c04f09. Mar 3 13:48:32.139866 systemd[1]: Started cri-containerd-5f1ef4742c77e3acd218799d3145a7c27bd36eaa28dc8ba2af9c52885f0bc0ba.scope - libcontainer container 5f1ef4742c77e3acd218799d3145a7c27bd36eaa28dc8ba2af9c52885f0bc0ba. Mar 3 13:48:32.142201 systemd[1]: Started cri-containerd-f5b17e2f8e3167c310f86da3644ed6f557a80698bafa9c82d38d8d3797f90b77.scope - libcontainer container f5b17e2f8e3167c310f86da3644ed6f557a80698bafa9c82d38d8d3797f90b77. 
Mar 3 13:48:32.207018 containerd[1553]: time="2026-03-03T13:48:32.206948983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f1ef4742c77e3acd218799d3145a7c27bd36eaa28dc8ba2af9c52885f0bc0ba\""
Mar 3 13:48:32.208603 containerd[1553]: time="2026-03-03T13:48:32.208569887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:09aeec0f76c28ab963055694829e2edd,Namespace:kube-system,Attempt:0,} returns sandbox id \"61fe7f52153e7f858f25896d7c3c5fd2d47171f2dbcee88d449213c027c04f09\""
Mar 3 13:48:32.210009 kubelet[2330]: E0303 13:48:32.209962 2330 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:48:32.210838 kubelet[2330]: E0303 13:48:32.210644 2330 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:48:32.217702 containerd[1553]: time="2026-03-03T13:48:32.217635797Z" level=info msg="CreateContainer within sandbox \"5f1ef4742c77e3acd218799d3145a7c27bd36eaa28dc8ba2af9c52885f0bc0ba\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 3 13:48:32.218274 containerd[1553]: time="2026-03-03T13:48:32.218164409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5b17e2f8e3167c310f86da3644ed6f557a80698bafa9c82d38d8d3797f90b77\""
Mar 3 13:48:32.219778 kubelet[2330]: E0303 13:48:32.219564 2330 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:48:32.220654 containerd[1553]: time="2026-03-03T13:48:32.220610337Z" level=info msg="CreateContainer within sandbox \"61fe7f52153e7f858f25896d7c3c5fd2d47171f2dbcee88d449213c027c04f09\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 3 13:48:32.224595 containerd[1553]: time="2026-03-03T13:48:32.224511050Z" level=info msg="CreateContainer within sandbox \"f5b17e2f8e3167c310f86da3644ed6f557a80698bafa9c82d38d8d3797f90b77\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 3 13:48:32.229846 containerd[1553]: time="2026-03-03T13:48:32.229809567Z" level=info msg="Container 3dcfba4a30a49f8ea0677acd64eb08b8e4f8b92f041b250021793cdfb47bf500: CDI devices from CRI Config.CDIDevices: []"
Mar 3 13:48:32.234905 containerd[1553]: time="2026-03-03T13:48:32.234836737Z" level=info msg="Container d4323885ea3c998345e063830892ec9c87ae5e44475f6e02c5af3de65375993a: CDI devices from CRI Config.CDIDevices: []"
Mar 3 13:48:32.241764 containerd[1553]: time="2026-03-03T13:48:32.241701083Z" level=info msg="CreateContainer within sandbox \"5f1ef4742c77e3acd218799d3145a7c27bd36eaa28dc8ba2af9c52885f0bc0ba\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3dcfba4a30a49f8ea0677acd64eb08b8e4f8b92f041b250021793cdfb47bf500\""
Mar 3 13:48:32.244142 containerd[1553]: time="2026-03-03T13:48:32.242372935Z" level=info msg="StartContainer for \"3dcfba4a30a49f8ea0677acd64eb08b8e4f8b92f041b250021793cdfb47bf500\""
Mar 3 13:48:32.244142 containerd[1553]: time="2026-03-03T13:48:32.243791383Z" level=info msg="connecting to shim 3dcfba4a30a49f8ea0677acd64eb08b8e4f8b92f041b250021793cdfb47bf500" address="unix:///run/containerd/s/fc316ce9c5c7bc955ff2fc08d99a78d98e3f5e7dd38e03428134a1d712bfa786" protocol=ttrpc version=3
Mar 3 13:48:32.244541 containerd[1553]: time="2026-03-03T13:48:32.244481412Z" level=info msg="Container 95404e0003355e8312a20cdc9f91a6eff948fd9be58b5e33f5470df7ae066400: CDI devices from CRI Config.CDIDevices: []"
Mar 3 13:48:32.274282 containerd[1553]: time="2026-03-03T13:48:32.274161454Z" level=info msg="CreateContainer within sandbox \"61fe7f52153e7f858f25896d7c3c5fd2d47171f2dbcee88d449213c027c04f09\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d4323885ea3c998345e063830892ec9c87ae5e44475f6e02c5af3de65375993a\""
Mar 3 13:48:32.274280 systemd[1]: Started cri-containerd-3dcfba4a30a49f8ea0677acd64eb08b8e4f8b92f041b250021793cdfb47bf500.scope - libcontainer container 3dcfba4a30a49f8ea0677acd64eb08b8e4f8b92f041b250021793cdfb47bf500.
Mar 3 13:48:32.275294 containerd[1553]: time="2026-03-03T13:48:32.275264394Z" level=info msg="StartContainer for \"d4323885ea3c998345e063830892ec9c87ae5e44475f6e02c5af3de65375993a\""
Mar 3 13:48:32.278125 containerd[1553]: time="2026-03-03T13:48:32.278056566Z" level=info msg="connecting to shim d4323885ea3c998345e063830892ec9c87ae5e44475f6e02c5af3de65375993a" address="unix:///run/containerd/s/2358e42243dd3b71465d6e8a26d4f6b3f4f3dba5f8945ceca57ed18d5d8189d0" protocol=ttrpc version=3
Mar 3 13:48:32.279847 containerd[1553]: time="2026-03-03T13:48:32.279782689Z" level=info msg="CreateContainer within sandbox \"f5b17e2f8e3167c310f86da3644ed6f557a80698bafa9c82d38d8d3797f90b77\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"95404e0003355e8312a20cdc9f91a6eff948fd9be58b5e33f5470df7ae066400\""
Mar 3 13:48:32.280494 containerd[1553]: time="2026-03-03T13:48:32.280426681Z" level=info msg="StartContainer for \"95404e0003355e8312a20cdc9f91a6eff948fd9be58b5e33f5470df7ae066400\""
Mar 3 13:48:32.281368 containerd[1553]: time="2026-03-03T13:48:32.281319989Z" level=info msg="connecting to shim 95404e0003355e8312a20cdc9f91a6eff948fd9be58b5e33f5470df7ae066400" address="unix:///run/containerd/s/021f7abf619e6e445ec05910eeff5affd91a16c4c03a9da76e07f6c2605ff48c" protocol=ttrpc version=3
Mar 3 13:48:32.315248 systemd[1]: Started cri-containerd-d4323885ea3c998345e063830892ec9c87ae5e44475f6e02c5af3de65375993a.scope - libcontainer container d4323885ea3c998345e063830892ec9c87ae5e44475f6e02c5af3de65375993a.
Mar 3 13:48:32.329359 systemd[1]: Started cri-containerd-95404e0003355e8312a20cdc9f91a6eff948fd9be58b5e33f5470df7ae066400.scope - libcontainer container 95404e0003355e8312a20cdc9f91a6eff948fd9be58b5e33f5470df7ae066400.
Mar 3 13:48:32.349698 containerd[1553]: time="2026-03-03T13:48:32.349393145Z" level=info msg="StartContainer for \"3dcfba4a30a49f8ea0677acd64eb08b8e4f8b92f041b250021793cdfb47bf500\" returns successfully"
Mar 3 13:48:32.390705 containerd[1553]: time="2026-03-03T13:48:32.390495683Z" level=info msg="StartContainer for \"d4323885ea3c998345e063830892ec9c87ae5e44475f6e02c5af3de65375993a\" returns successfully"
Mar 3 13:48:32.402172 kubelet[2330]: E0303 13:48:32.402145 2330 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 3 13:48:32.405052 containerd[1553]: time="2026-03-03T13:48:32.404998860Z" level=info msg="StartContainer for \"95404e0003355e8312a20cdc9f91a6eff948fd9be58b5e33f5470df7ae066400\" returns successfully"
Mar 3 13:48:32.685914 kubelet[2330]: I0303 13:48:32.685734 2330 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 3 13:48:33.159957 kubelet[2330]: E0303 13:48:33.159798 2330 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 3 13:48:33.160257 kubelet[2330]: E0303 13:48:33.159962 2330 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:48:33.163647 kubelet[2330]: E0303 13:48:33.163592 2330 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 3 13:48:33.164491 kubelet[2330]: E0303 13:48:33.164441 2330 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:48:33.165497 kubelet[2330]: E0303 13:48:33.165447 2330 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 3 13:48:33.167125 kubelet[2330]: E0303 13:48:33.165567 2330 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:48:33.593148 kubelet[2330]: E0303 13:48:33.592917 2330 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Mar 3 13:48:33.672582 kubelet[2330]: I0303 13:48:33.672507 2330 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Mar 3 13:48:33.708273 kubelet[2330]: I0303 13:48:33.708173 2330 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 3 13:48:33.764632 kubelet[2330]: E0303 13:48:33.764496 2330 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Mar 3 13:48:33.764632 kubelet[2330]: I0303 13:48:33.764567 2330 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 3 13:48:33.767059 kubelet[2330]: E0303 13:48:33.766966 2330 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Mar 3 13:48:33.767059 kubelet[2330]: I0303 13:48:33.767027 2330 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 3 13:48:33.768848 kubelet[2330]: E0303 13:48:33.768796 2330 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Mar 3 13:48:34.093757 kubelet[2330]: I0303 13:48:34.093541 2330 apiserver.go:52] "Watching apiserver"
Mar 3 13:48:34.108394 kubelet[2330]: I0303 13:48:34.108324 2330 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 3 13:48:34.166134 kubelet[2330]: I0303 13:48:34.166046 2330 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 3 13:48:34.167778 kubelet[2330]: I0303 13:48:34.166294 2330 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 3 13:48:34.168257 kubelet[2330]: E0303 13:48:34.168187 2330 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Mar 3 13:48:34.168257 kubelet[2330]: E0303 13:48:34.168228 2330 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Mar 3 13:48:34.168400 kubelet[2330]: E0303 13:48:34.168374 2330 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:48:34.168454 kubelet[2330]: E0303 13:48:34.168420 2330 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:48:35.317060 kubelet[2330]: I0303 13:48:35.316930 2330 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 3 13:48:35.322574 kubelet[2330]: E0303 13:48:35.322497 2330 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:48:35.796038 systemd[1]: Reload requested from client PID 2618 ('systemctl') (unit session-7.scope)...
Mar 3 13:48:35.796068 systemd[1]: Reloading...
Mar 3 13:48:35.884268 zram_generator::config[2661]: No configuration found.
Mar 3 13:48:36.121314 systemd[1]: Reloading finished in 324 ms.
Mar 3 13:48:36.154619 kubelet[2330]: I0303 13:48:36.154499 2330 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 3 13:48:36.154811 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 13:48:36.171025 systemd[1]: kubelet.service: Deactivated successfully.
Mar 3 13:48:36.171517 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 13:48:36.171606 systemd[1]: kubelet.service: Consumed 1.356s CPU time, 127.5M memory peak.
Mar 3 13:48:36.174363 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 13:48:36.386843 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 13:48:36.398618 (kubelet)[2706]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 3 13:48:36.465114 kubelet[2706]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 3 13:48:36.465114 kubelet[2706]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 3 13:48:36.465852 kubelet[2706]: I0303 13:48:36.465710 2706 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 3 13:48:36.473503 kubelet[2706]: I0303 13:48:36.473453 2706 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 3 13:48:36.473503 kubelet[2706]: I0303 13:48:36.473487 2706 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 3 13:48:36.473579 kubelet[2706]: I0303 13:48:36.473513 2706 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 3 13:48:36.473579 kubelet[2706]: I0303 13:48:36.473524 2706 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 3 13:48:36.473806 kubelet[2706]: I0303 13:48:36.473762 2706 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 3 13:48:36.474877 kubelet[2706]: I0303 13:48:36.474840 2706 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 3 13:48:36.477545 kubelet[2706]: I0303 13:48:36.477498 2706 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 3 13:48:36.481561 kubelet[2706]: I0303 13:48:36.481507 2706 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 3 13:48:36.490697 kubelet[2706]: I0303 13:48:36.490624 2706 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 3 13:48:36.491167 kubelet[2706]: I0303 13:48:36.491065 2706 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 3 13:48:36.491337 kubelet[2706]: I0303 13:48:36.491150 2706 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 3 13:48:36.491337 kubelet[2706]: I0303 13:48:36.491317 2706 topology_manager.go:138] "Creating topology manager with none policy"
Mar 3 13:48:36.491337 kubelet[2706]: I0303 13:48:36.491327 2706 container_manager_linux.go:306] "Creating device plugin manager"
Mar 3 13:48:36.491485 kubelet[2706]: I0303 13:48:36.491361 2706 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 3 13:48:36.491579 kubelet[2706]: I0303 13:48:36.491546 2706 state_mem.go:36] "Initialized new in-memory state store"
Mar 3 13:48:36.491967 kubelet[2706]: I0303 13:48:36.491908 2706 kubelet.go:475] "Attempting to sync node with API server"
Mar 3 13:48:36.491967 kubelet[2706]: I0303 13:48:36.491946 2706 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 3 13:48:36.492030 kubelet[2706]: I0303 13:48:36.491981 2706 kubelet.go:387] "Adding apiserver pod source"
Mar 3 13:48:36.492030 kubelet[2706]: I0303 13:48:36.492006 2706 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 3 13:48:36.493583 kubelet[2706]: I0303 13:48:36.493565 2706 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 3 13:48:36.494708 kubelet[2706]: I0303 13:48:36.494362 2706 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 3 13:48:36.494708 kubelet[2706]: I0303 13:48:36.494389 2706 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 3 13:48:36.498862 kubelet[2706]: I0303 13:48:36.498821 2706 server.go:1262] "Started kubelet"
Mar 3 13:48:36.501761 kubelet[2706]: I0303 13:48:36.501746 2706 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 3 13:48:36.503373 kubelet[2706]: I0303 13:48:36.503327 2706 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 3 13:48:36.505513 kubelet[2706]: I0303 13:48:36.505384 2706 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 3 13:48:36.505513 kubelet[2706]: I0303 13:48:36.505462 2706 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 3 13:48:36.505651 kubelet[2706]: I0303 13:48:36.505585 2706 reconciler.go:29] "Reconciler: start to sync state"
Mar 3 13:48:36.506015 kubelet[2706]: I0303 13:48:36.505899 2706 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 3 13:48:36.506015 kubelet[2706]: I0303 13:48:36.505991 2706 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 3 13:48:36.507014 kubelet[2706]: I0303 13:48:36.506663 2706 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 3 13:48:36.507244 kubelet[2706]: I0303 13:48:36.507141 2706 server.go:310] "Adding debug handlers to kubelet server"
Mar 3 13:48:36.507587 kubelet[2706]: I0303 13:48:36.507516 2706 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 3 13:48:36.508302 kubelet[2706]: I0303 13:48:36.508246 2706 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 3 13:48:36.508638 kubelet[2706]: E0303 13:48:36.508562 2706 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 3 13:48:36.510661 kubelet[2706]: I0303 13:48:36.510623 2706 factory.go:223] Registration of the containerd container factory successfully
Mar 3 13:48:36.510661 kubelet[2706]: I0303 13:48:36.510660 2706 factory.go:223] Registration of the systemd container factory successfully
Mar 3 13:48:36.528041 kubelet[2706]: I0303 13:48:36.527935 2706 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 3 13:48:36.530227 kubelet[2706]: I0303 13:48:36.530137 2706 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 3 13:48:36.530227 kubelet[2706]: I0303 13:48:36.530156 2706 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 3 13:48:36.530227 kubelet[2706]: I0303 13:48:36.530177 2706 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 3 13:48:36.530227 kubelet[2706]: E0303 13:48:36.530217 2706 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 3 13:48:36.555744 kubelet[2706]: I0303 13:48:36.555570 2706 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 3 13:48:36.555744 kubelet[2706]: I0303 13:48:36.555590 2706 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 3 13:48:36.555744 kubelet[2706]: I0303 13:48:36.555614 2706 state_mem.go:36] "Initialized new in-memory state store"
Mar 3 13:48:36.556501 kubelet[2706]: I0303 13:48:36.555821 2706 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 3 13:48:36.556501 kubelet[2706]: I0303 13:48:36.555837 2706 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 3 13:48:36.556501 kubelet[2706]: I0303 13:48:36.555863 2706 policy_none.go:49] "None policy: Start"
Mar 3 13:48:36.556501 kubelet[2706]: I0303 13:48:36.555876 2706 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 3 13:48:36.556501 kubelet[2706]: I0303 13:48:36.555893 2706 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 3 13:48:36.556501 kubelet[2706]: I0303 13:48:36.556023 2706 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Mar 3 13:48:36.556501 kubelet[2706]: I0303 13:48:36.556036 2706 policy_none.go:47] "Start"
Mar 3 13:48:36.562581 kubelet[2706]: E0303 13:48:36.562062 2706 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 3 13:48:36.562581 kubelet[2706]: I0303 13:48:36.562345 2706 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 3 13:48:36.562581 kubelet[2706]: I0303 13:48:36.562362 2706 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 3 13:48:36.562709 kubelet[2706]: I0303 13:48:36.562659 2706 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 3 13:48:36.564262 kubelet[2706]: E0303 13:48:36.564208 2706 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 3 13:48:36.631615 kubelet[2706]: I0303 13:48:36.631549 2706 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 3 13:48:36.631615 kubelet[2706]: I0303 13:48:36.631573 2706 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 3 13:48:36.631615 kubelet[2706]: I0303 13:48:36.631610 2706 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 3 13:48:36.641018 kubelet[2706]: E0303 13:48:36.640807 2706 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Mar 3 13:48:36.672818 kubelet[2706]: I0303 13:48:36.672735 2706 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 3 13:48:36.685656 kubelet[2706]: I0303 13:48:36.685585 2706 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Mar 3 13:48:36.685814 kubelet[2706]: I0303 13:48:36.685739 2706 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Mar 3 13:48:36.706988 kubelet[2706]: I0303 13:48:36.706871 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 3 13:48:36.706988 kubelet[2706]: I0303 13:48:36.706913 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost"
Mar 3 13:48:36.706988 kubelet[2706]: I0303 13:48:36.706936 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/09aeec0f76c28ab963055694829e2edd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"09aeec0f76c28ab963055694829e2edd\") " pod="kube-system/kube-apiserver-localhost"
Mar 3 13:48:36.706988 kubelet[2706]: I0303 13:48:36.706965 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/09aeec0f76c28ab963055694829e2edd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"09aeec0f76c28ab963055694829e2edd\") " pod="kube-system/kube-apiserver-localhost"
Mar 3 13:48:36.706988 kubelet[2706]: I0303 13:48:36.706989 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/09aeec0f76c28ab963055694829e2edd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"09aeec0f76c28ab963055694829e2edd\") " pod="kube-system/kube-apiserver-localhost"
Mar 3 13:48:36.707500 kubelet[2706]: I0303 13:48:36.707015 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 3 13:48:36.707500 kubelet[2706]: I0303 13:48:36.707052 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 3 13:48:36.707500 kubelet[2706]: I0303 13:48:36.707130 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 3 13:48:36.707500 kubelet[2706]: I0303 13:48:36.707158 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 3 13:48:36.802868 sudo[2746]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Mar 3 13:48:36.803325 sudo[2746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Mar 3 13:48:36.942333 kubelet[2706]: E0303 13:48:36.942162 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:48:36.942333 kubelet[2706]: E0303 13:48:36.942197 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:48:36.942981 kubelet[2706]: E0303 13:48:36.942162 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:48:37.182998 sudo[2746]: pam_unix(sudo:session): session closed for user root
Mar 3 13:48:37.492664 kubelet[2706]: I0303 13:48:37.492586 2706 apiserver.go:52] "Watching apiserver"
Mar 3 13:48:37.506259 kubelet[2706]: I0303 13:48:37.506225 2706 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 3 13:48:37.546405 kubelet[2706]: I0303 13:48:37.546181 2706 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 3 13:48:37.549902 kubelet[2706]: E0303 13:48:37.547927 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:48:37.550410 kubelet[2706]: I0303 13:48:37.550312 2706 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 3 13:48:37.557939 kubelet[2706]: E0303 13:48:37.557789 2706 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Mar 3 13:48:37.558174 kubelet[2706]: E0303 13:48:37.557992 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:48:37.561628 kubelet[2706]: E0303 13:48:37.561524 2706 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Mar 3 13:48:37.562185 kubelet[2706]: E0303 13:48:37.562132 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:48:37.583795 kubelet[2706]: I0303 13:48:37.583452 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.583410276 podStartE2EDuration="2.583410276s" podCreationTimestamp="2026-03-03 13:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 13:48:37.572509348 +0000 UTC m=+1.168239505" watchObservedRunningTime="2026-03-03 13:48:37.583410276 +0000 UTC m=+1.179140423"
Mar 3 13:48:37.596140 kubelet[2706]: I0303 13:48:37.596037 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.5960174870000001 podStartE2EDuration="1.596017487s" podCreationTimestamp="2026-03-03 13:48:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 13:48:37.583639475 +0000 UTC m=+1.179369632" watchObservedRunningTime="2026-03-03 13:48:37.596017487 +0000 UTC m=+1.191747623"
Mar 3 13:48:37.596440 kubelet[2706]: I0303 13:48:37.596227 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.596216728 podStartE2EDuration="1.596216728s" podCreationTimestamp="2026-03-03 13:48:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 13:48:37.595747184 +0000 UTC m=+1.191477371" watchObservedRunningTime="2026-03-03 13:48:37.596216728 +0000 UTC m=+1.191946865"
Mar 3 13:48:38.510229 sudo[1760]: pam_unix(sudo:session): session closed for user root
Mar 3 13:48:38.512288 sshd[1759]: Connection closed by 10.0.0.1 port 40208
Mar 3 13:48:38.512864 sshd-session[1756]: pam_unix(sshd:session): session closed for user core
Mar 3 13:48:38.519195 systemd[1]: sshd@6-10.0.0.100:22-10.0.0.1:40208.service: Deactivated successfully.
Mar 3 13:48:38.522915 systemd[1]: session-7.scope: Deactivated successfully.
Mar 3 13:48:38.523604 systemd[1]: session-7.scope: Consumed 6.228s CPU time, 274.5M memory peak.
Mar 3 13:48:38.525891 systemd-logind[1535]: Session 7 logged out. Waiting for processes to exit.
Mar 3 13:48:38.527636 systemd-logind[1535]: Removed session 7.
Mar 3 13:48:38.548980 kubelet[2706]: E0303 13:48:38.548910 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:48:38.549721 kubelet[2706]: E0303 13:48:38.549233 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:48:39.550595 kubelet[2706]: E0303 13:48:39.550556 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:48:40.126354 kubelet[2706]: E0303 13:48:40.126311 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:48:41.523066 kubelet[2706]: E0303 13:48:41.522815 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:48:43.027516 kubelet[2706]: I0303 13:48:43.026566 2706 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 3 13:48:43.043863 containerd[1553]: time="2026-03-03T13:48:43.043544927Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 3 13:48:43.045030 kubelet[2706]: I0303 13:48:43.044523 2706 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 3 13:48:43.884161 systemd[1]: Created slice kubepods-besteffort-poddba2ea43_3bbe_4b2b_ae1f_57421316afc4.slice - libcontainer container kubepods-besteffort-poddba2ea43_3bbe_4b2b_ae1f_57421316afc4.slice.
Mar 3 13:48:43.911315 systemd[1]: Created slice kubepods-burstable-pod1d3e108f_d470_4a51_a148_0de592291451.slice - libcontainer container kubepods-burstable-pod1d3e108f_d470_4a51_a148_0de592291451.slice.
Mar 3 13:48:43.976917 kubelet[2706]: I0303 13:48:43.976738 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1d3e108f-d470-4a51-a148-0de592291451-xtables-lock\") pod \"cilium-fpcpv\" (UID: \"1d3e108f-d470-4a51-a148-0de592291451\") " pod="kube-system/cilium-fpcpv"
Mar 3 13:48:43.976917 kubelet[2706]: I0303 13:48:43.976894 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1d3e108f-d470-4a51-a148-0de592291451-etc-cni-netd\") pod \"cilium-fpcpv\" (UID: \"1d3e108f-d470-4a51-a148-0de592291451\") " pod="kube-system/cilium-fpcpv"
Mar 3 13:48:43.977230 kubelet[2706]: I0303 13:48:43.976929 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1d3e108f-d470-4a51-a148-0de592291451-cilium-config-path\") pod \"cilium-fpcpv\" (UID: \"1d3e108f-d470-4a51-a148-0de592291451\") " pod="kube-system/cilium-fpcpv"
Mar 3 13:48:43.977230 kubelet[2706]: I0303 13:48:43.976958 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1d3e108f-d470-4a51-a148-0de592291451-host-proc-sys-net\") pod \"cilium-fpcpv\" (UID: \"1d3e108f-d470-4a51-a148-0de592291451\") " pod="kube-system/cilium-fpcpv"
Mar 3 13:48:43.977230 kubelet[2706]: I0303 13:48:43.976979 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1d3e108f-d470-4a51-a148-0de592291451-hubble-tls\") pod \"cilium-fpcpv\" (UID: \"1d3e108f-d470-4a51-a148-0de592291451\") " pod="kube-system/cilium-fpcpv"
Mar 3 13:48:43.977230 kubelet[2706]: I0303 13:48:43.977008 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tfm9\" (UniqueName: \"kubernetes.io/projected/dba2ea43-3bbe-4b2b-ae1f-57421316afc4-kube-api-access-2tfm9\") pod \"kube-proxy-qtshh\" (UID: \"dba2ea43-3bbe-4b2b-ae1f-57421316afc4\") " pod="kube-system/kube-proxy-qtshh"
Mar 3 13:48:43.977230 kubelet[2706]: I0303 13:48:43.977034 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1d3e108f-d470-4a51-a148-0de592291451-bpf-maps\") pod \"cilium-fpcpv\" (UID: \"1d3e108f-d470-4a51-a148-0de592291451\") " pod="kube-system/cilium-fpcpv"
Mar 3 13:48:43.977403 kubelet[2706]: I0303 13:48:43.977062 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1d3e108f-d470-4a51-a148-0de592291451-cilium-cgroup\") pod \"cilium-fpcpv\" (UID: \"1d3e108f-d470-4a51-a148-0de592291451\") " pod="kube-system/cilium-fpcpv"
Mar 3 13:48:43.977403 kubelet[2706]: I0303 13:48:43.977390 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1d3e108f-d470-4a51-a148-0de592291451-clustermesh-secrets\") pod
\"cilium-fpcpv\" (UID: \"1d3e108f-d470-4a51-a148-0de592291451\") " pod="kube-system/cilium-fpcpv" Mar 3 13:48:43.977475 kubelet[2706]: I0303 13:48:43.977418 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq8tb\" (UniqueName: \"kubernetes.io/projected/1d3e108f-d470-4a51-a148-0de592291451-kube-api-access-jq8tb\") pod \"cilium-fpcpv\" (UID: \"1d3e108f-d470-4a51-a148-0de592291451\") " pod="kube-system/cilium-fpcpv" Mar 3 13:48:43.977475 kubelet[2706]: I0303 13:48:43.977450 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dba2ea43-3bbe-4b2b-ae1f-57421316afc4-kube-proxy\") pod \"kube-proxy-qtshh\" (UID: \"dba2ea43-3bbe-4b2b-ae1f-57421316afc4\") " pod="kube-system/kube-proxy-qtshh" Mar 3 13:48:43.977475 kubelet[2706]: I0303 13:48:43.977471 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dba2ea43-3bbe-4b2b-ae1f-57421316afc4-xtables-lock\") pod \"kube-proxy-qtshh\" (UID: \"dba2ea43-3bbe-4b2b-ae1f-57421316afc4\") " pod="kube-system/kube-proxy-qtshh" Mar 3 13:48:43.977819 kubelet[2706]: I0303 13:48:43.977495 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1d3e108f-d470-4a51-a148-0de592291451-cilium-run\") pod \"cilium-fpcpv\" (UID: \"1d3e108f-d470-4a51-a148-0de592291451\") " pod="kube-system/cilium-fpcpv" Mar 3 13:48:43.977819 kubelet[2706]: I0303 13:48:43.977636 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1d3e108f-d470-4a51-a148-0de592291451-hostproc\") pod \"cilium-fpcpv\" (UID: \"1d3e108f-d470-4a51-a148-0de592291451\") " pod="kube-system/cilium-fpcpv" Mar 3 13:48:43.977819 
kubelet[2706]: I0303 13:48:43.977664 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1d3e108f-d470-4a51-a148-0de592291451-cni-path\") pod \"cilium-fpcpv\" (UID: \"1d3e108f-d470-4a51-a148-0de592291451\") " pod="kube-system/cilium-fpcpv" Mar 3 13:48:43.977819 kubelet[2706]: I0303 13:48:43.977826 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1d3e108f-d470-4a51-a148-0de592291451-host-proc-sys-kernel\") pod \"cilium-fpcpv\" (UID: \"1d3e108f-d470-4a51-a148-0de592291451\") " pod="kube-system/cilium-fpcpv" Mar 3 13:48:43.978461 kubelet[2706]: I0303 13:48:43.977860 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dba2ea43-3bbe-4b2b-ae1f-57421316afc4-lib-modules\") pod \"kube-proxy-qtshh\" (UID: \"dba2ea43-3bbe-4b2b-ae1f-57421316afc4\") " pod="kube-system/kube-proxy-qtshh" Mar 3 13:48:43.978461 kubelet[2706]: I0303 13:48:43.977887 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1d3e108f-d470-4a51-a148-0de592291451-lib-modules\") pod \"cilium-fpcpv\" (UID: \"1d3e108f-d470-4a51-a148-0de592291451\") " pod="kube-system/cilium-fpcpv" Mar 3 13:48:44.018511 systemd[1]: Created slice kubepods-besteffort-pod9a8f56fc_7b44_4a86_8e11_61df2076802e.slice - libcontainer container kubepods-besteffort-pod9a8f56fc_7b44_4a86_8e11_61df2076802e.slice. 
Mar 3 13:48:44.078589 kubelet[2706]: I0303 13:48:44.078481 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzn8l\" (UniqueName: \"kubernetes.io/projected/9a8f56fc-7b44-4a86-8e11-61df2076802e-kube-api-access-pzn8l\") pod \"cilium-operator-6f9c7c5859-qxbgt\" (UID: \"9a8f56fc-7b44-4a86-8e11-61df2076802e\") " pod="kube-system/cilium-operator-6f9c7c5859-qxbgt" Mar 3 13:48:44.079153 kubelet[2706]: I0303 13:48:44.078636 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9a8f56fc-7b44-4a86-8e11-61df2076802e-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-qxbgt\" (UID: \"9a8f56fc-7b44-4a86-8e11-61df2076802e\") " pod="kube-system/cilium-operator-6f9c7c5859-qxbgt" Mar 3 13:48:44.200911 kubelet[2706]: E0303 13:48:44.200489 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:48:44.205372 containerd[1553]: time="2026-03-03T13:48:44.204838453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qtshh,Uid:dba2ea43-3bbe-4b2b-ae1f-57421316afc4,Namespace:kube-system,Attempt:0,}" Mar 3 13:48:44.221484 kubelet[2706]: E0303 13:48:44.221341 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:48:44.222809 containerd[1553]: time="2026-03-03T13:48:44.222663574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fpcpv,Uid:1d3e108f-d470-4a51-a148-0de592291451,Namespace:kube-system,Attempt:0,}" Mar 3 13:48:44.256356 containerd[1553]: time="2026-03-03T13:48:44.256281763Z" level=info msg="connecting to shim 0521ae9b8f7b95716fb415d4bf453d54856c10f0695f3b68eb7e31e9d001530b" 
address="unix:///run/containerd/s/75a6467dc16c9dd3949c25f11ec10382e66f465d7801aa238374c2a8647114ed" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:48:44.267867 containerd[1553]: time="2026-03-03T13:48:44.267245232Z" level=info msg="connecting to shim 799727023f44c1b7e6041e6e65e77d24bb651f2deff70094b0aa606ab5c70fd6" address="unix:///run/containerd/s/aad67003f63f2f238f0c1bca00d36d136b3a5988c5fe18904073a4a7cd3e2edf" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:48:44.309798 systemd[1]: Started cri-containerd-0521ae9b8f7b95716fb415d4bf453d54856c10f0695f3b68eb7e31e9d001530b.scope - libcontainer container 0521ae9b8f7b95716fb415d4bf453d54856c10f0695f3b68eb7e31e9d001530b. Mar 3 13:48:44.315795 systemd[1]: Started cri-containerd-799727023f44c1b7e6041e6e65e77d24bb651f2deff70094b0aa606ab5c70fd6.scope - libcontainer container 799727023f44c1b7e6041e6e65e77d24bb651f2deff70094b0aa606ab5c70fd6. Mar 3 13:48:44.328543 kubelet[2706]: E0303 13:48:44.328460 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:48:44.332360 containerd[1553]: time="2026-03-03T13:48:44.330809522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-qxbgt,Uid:9a8f56fc-7b44-4a86-8e11-61df2076802e,Namespace:kube-system,Attempt:0,}" Mar 3 13:48:44.371804 containerd[1553]: time="2026-03-03T13:48:44.371172628Z" level=info msg="connecting to shim 77c66afe0f1d3de6ea79f3060a43139bbdbae64ec2896ff29b4e6aeb3664695b" address="unix:///run/containerd/s/91a889a25ff977cf276963e81cb40219f5cb5ec3284619644647ee3d42c6d8e7" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:48:44.374623 containerd[1553]: time="2026-03-03T13:48:44.374518590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fpcpv,Uid:1d3e108f-d470-4a51-a148-0de592291451,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"799727023f44c1b7e6041e6e65e77d24bb651f2deff70094b0aa606ab5c70fd6\"" Mar 3 13:48:44.376131 kubelet[2706]: E0303 13:48:44.376047 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:48:44.382704 containerd[1553]: time="2026-03-03T13:48:44.382613122Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 3 13:48:44.407933 containerd[1553]: time="2026-03-03T13:48:44.406402923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qtshh,Uid:dba2ea43-3bbe-4b2b-ae1f-57421316afc4,Namespace:kube-system,Attempt:0,} returns sandbox id \"0521ae9b8f7b95716fb415d4bf453d54856c10f0695f3b68eb7e31e9d001530b\"" Mar 3 13:48:44.409462 kubelet[2706]: E0303 13:48:44.409436 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:48:44.428479 systemd[1]: Started cri-containerd-77c66afe0f1d3de6ea79f3060a43139bbdbae64ec2896ff29b4e6aeb3664695b.scope - libcontainer container 77c66afe0f1d3de6ea79f3060a43139bbdbae64ec2896ff29b4e6aeb3664695b. 
Mar 3 13:48:44.434949 containerd[1553]: time="2026-03-03T13:48:44.434872803Z" level=info msg="CreateContainer within sandbox \"0521ae9b8f7b95716fb415d4bf453d54856c10f0695f3b68eb7e31e9d001530b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 3 13:48:44.461332 containerd[1553]: time="2026-03-03T13:48:44.459567641Z" level=info msg="Container 12874d58d4d0e61e9e5093c157e885548c421e5f4c14c44f08990cc39f111fc9: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:48:44.484942 containerd[1553]: time="2026-03-03T13:48:44.484773764Z" level=info msg="CreateContainer within sandbox \"0521ae9b8f7b95716fb415d4bf453d54856c10f0695f3b68eb7e31e9d001530b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"12874d58d4d0e61e9e5093c157e885548c421e5f4c14c44f08990cc39f111fc9\"" Mar 3 13:48:44.486642 containerd[1553]: time="2026-03-03T13:48:44.486559351Z" level=info msg="StartContainer for \"12874d58d4d0e61e9e5093c157e885548c421e5f4c14c44f08990cc39f111fc9\"" Mar 3 13:48:44.498110 containerd[1553]: time="2026-03-03T13:48:44.497812267Z" level=info msg="connecting to shim 12874d58d4d0e61e9e5093c157e885548c421e5f4c14c44f08990cc39f111fc9" address="unix:///run/containerd/s/75a6467dc16c9dd3949c25f11ec10382e66f465d7801aa238374c2a8647114ed" protocol=ttrpc version=3 Mar 3 13:48:44.545528 containerd[1553]: time="2026-03-03T13:48:44.545486829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-qxbgt,Uid:9a8f56fc-7b44-4a86-8e11-61df2076802e,Namespace:kube-system,Attempt:0,} returns sandbox id \"77c66afe0f1d3de6ea79f3060a43139bbdbae64ec2896ff29b4e6aeb3664695b\"" Mar 3 13:48:44.548028 kubelet[2706]: E0303 13:48:44.547604 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:48:44.549380 systemd[1]: Started cri-containerd-12874d58d4d0e61e9e5093c157e885548c421e5f4c14c44f08990cc39f111fc9.scope - 
libcontainer container 12874d58d4d0e61e9e5093c157e885548c421e5f4c14c44f08990cc39f111fc9. Mar 3 13:48:44.709254 containerd[1553]: time="2026-03-03T13:48:44.707357934Z" level=info msg="StartContainer for \"12874d58d4d0e61e9e5093c157e885548c421e5f4c14c44f08990cc39f111fc9\" returns successfully" Mar 3 13:48:45.546706 kubelet[2706]: E0303 13:48:45.546380 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:48:45.584897 kubelet[2706]: I0303 13:48:45.584513 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qtshh" podStartSLOduration=2.584423257 podStartE2EDuration="2.584423257s" podCreationTimestamp="2026-03-03 13:48:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 13:48:45.583345133 +0000 UTC m=+9.179075291" watchObservedRunningTime="2026-03-03 13:48:45.584423257 +0000 UTC m=+9.180153394" Mar 3 13:48:45.818923 kubelet[2706]: E0303 13:48:45.816799 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:48:46.557482 kubelet[2706]: E0303 13:48:46.557198 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:48:46.558828 kubelet[2706]: E0303 13:48:46.558720 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:48:50.030793 kubelet[2706]: E0303 13:48:50.030291 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:48:50.135437 kubelet[2706]: E0303 13:48:50.135231 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:48:50.590746 kubelet[2706]: E0303 13:48:50.590527 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:48:50.591621 kubelet[2706]: E0303 13:48:50.591574 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:48:54.369204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount225305061.mount: Deactivated successfully. Mar 3 13:48:54.689863 update_engine[1539]: I20260303 13:48:54.689492 1539 update_attempter.cc:509] Updating boot flags... Mar 3 13:48:59.336322 containerd[1553]: time="2026-03-03T13:48:59.336221110Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:48:59.338267 containerd[1553]: time="2026-03-03T13:48:59.338031813Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 3 13:48:59.340064 containerd[1553]: time="2026-03-03T13:48:59.339963664Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:48:59.342559 containerd[1553]: time="2026-03-03T13:48:59.342435039Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id 
\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 14.95957618s" Mar 3 13:48:59.342559 containerd[1553]: time="2026-03-03T13:48:59.342538331Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 3 13:48:59.344787 containerd[1553]: time="2026-03-03T13:48:59.344710193Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 3 13:48:59.364579 containerd[1553]: time="2026-03-03T13:48:59.363832328Z" level=info msg="CreateContainer within sandbox \"799727023f44c1b7e6041e6e65e77d24bb651f2deff70094b0aa606ab5c70fd6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 3 13:48:59.383402 containerd[1553]: time="2026-03-03T13:48:59.383265450Z" level=info msg="Container 97415b473779df1bc8607a6a5286b44637084e45fa2ab71aba9c3fa1daa82a21: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:48:59.393878 containerd[1553]: time="2026-03-03T13:48:59.393627921Z" level=info msg="CreateContainer within sandbox \"799727023f44c1b7e6041e6e65e77d24bb651f2deff70094b0aa606ab5c70fd6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"97415b473779df1bc8607a6a5286b44637084e45fa2ab71aba9c3fa1daa82a21\"" Mar 3 13:48:59.394829 containerd[1553]: time="2026-03-03T13:48:59.394797473Z" level=info msg="StartContainer for \"97415b473779df1bc8607a6a5286b44637084e45fa2ab71aba9c3fa1daa82a21\"" Mar 3 13:48:59.396728 containerd[1553]: time="2026-03-03T13:48:59.396636551Z" level=info msg="connecting to shim 97415b473779df1bc8607a6a5286b44637084e45fa2ab71aba9c3fa1daa82a21" 
address="unix:///run/containerd/s/aad67003f63f2f238f0c1bca00d36d136b3a5988c5fe18904073a4a7cd3e2edf" protocol=ttrpc version=3 Mar 3 13:48:59.475402 systemd[1]: Started cri-containerd-97415b473779df1bc8607a6a5286b44637084e45fa2ab71aba9c3fa1daa82a21.scope - libcontainer container 97415b473779df1bc8607a6a5286b44637084e45fa2ab71aba9c3fa1daa82a21. Mar 3 13:48:59.872448 containerd[1553]: time="2026-03-03T13:48:59.872348941Z" level=info msg="StartContainer for \"97415b473779df1bc8607a6a5286b44637084e45fa2ab71aba9c3fa1daa82a21\" returns successfully" Mar 3 13:48:59.884536 systemd[1]: cri-containerd-97415b473779df1bc8607a6a5286b44637084e45fa2ab71aba9c3fa1daa82a21.scope: Deactivated successfully. Mar 3 13:48:59.893803 containerd[1553]: time="2026-03-03T13:48:59.893572529Z" level=info msg="received container exit event container_id:\"97415b473779df1bc8607a6a5286b44637084e45fa2ab71aba9c3fa1daa82a21\" id:\"97415b473779df1bc8607a6a5286b44637084e45fa2ab71aba9c3fa1daa82a21\" pid:3151 exited_at:{seconds:1772545739 nanos:892044042}" Mar 3 13:48:59.951358 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97415b473779df1bc8607a6a5286b44637084e45fa2ab71aba9c3fa1daa82a21-rootfs.mount: Deactivated successfully. 
Mar 3 13:49:00.880614 kubelet[2706]: E0303 13:49:00.880510 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:49:00.889595 containerd[1553]: time="2026-03-03T13:49:00.889501929Z" level=info msg="CreateContainer within sandbox \"799727023f44c1b7e6041e6e65e77d24bb651f2deff70094b0aa606ab5c70fd6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 3 13:49:00.911865 containerd[1553]: time="2026-03-03T13:49:00.911579034Z" level=info msg="Container 3ff51058d6517cadd70293e8066a97c18dd6e2f06d16305fe4b2bd4fca070bda: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:49:00.919198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2980568784.mount: Deactivated successfully. Mar 3 13:49:00.926019 containerd[1553]: time="2026-03-03T13:49:00.925910614Z" level=info msg="CreateContainer within sandbox \"799727023f44c1b7e6041e6e65e77d24bb651f2deff70094b0aa606ab5c70fd6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3ff51058d6517cadd70293e8066a97c18dd6e2f06d16305fe4b2bd4fca070bda\"" Mar 3 13:49:00.927942 containerd[1553]: time="2026-03-03T13:49:00.927739482Z" level=info msg="StartContainer for \"3ff51058d6517cadd70293e8066a97c18dd6e2f06d16305fe4b2bd4fca070bda\"" Mar 3 13:49:00.930543 containerd[1553]: time="2026-03-03T13:49:00.930437079Z" level=info msg="connecting to shim 3ff51058d6517cadd70293e8066a97c18dd6e2f06d16305fe4b2bd4fca070bda" address="unix:///run/containerd/s/aad67003f63f2f238f0c1bca00d36d136b3a5988c5fe18904073a4a7cd3e2edf" protocol=ttrpc version=3 Mar 3 13:49:00.967330 systemd[1]: Started cri-containerd-3ff51058d6517cadd70293e8066a97c18dd6e2f06d16305fe4b2bd4fca070bda.scope - libcontainer container 3ff51058d6517cadd70293e8066a97c18dd6e2f06d16305fe4b2bd4fca070bda. 
Mar 3 13:49:01.030249 containerd[1553]: time="2026-03-03T13:49:01.030190213Z" level=info msg="StartContainer for \"3ff51058d6517cadd70293e8066a97c18dd6e2f06d16305fe4b2bd4fca070bda\" returns successfully" Mar 3 13:49:01.065571 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 3 13:49:01.065962 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 3 13:49:01.069379 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 3 13:49:01.071943 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 3 13:49:01.074909 containerd[1553]: time="2026-03-03T13:49:01.074770250Z" level=info msg="received container exit event container_id:\"3ff51058d6517cadd70293e8066a97c18dd6e2f06d16305fe4b2bd4fca070bda\" id:\"3ff51058d6517cadd70293e8066a97c18dd6e2f06d16305fe4b2bd4fca070bda\" pid:3199 exited_at:{seconds:1772545741 nanos:74177950}" Mar 3 13:49:01.075337 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 3 13:49:01.076176 systemd[1]: cri-containerd-3ff51058d6517cadd70293e8066a97c18dd6e2f06d16305fe4b2bd4fca070bda.scope: Deactivated successfully. Mar 3 13:49:01.107702 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 3 13:49:01.888640 kubelet[2706]: E0303 13:49:01.888381 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:49:01.901955 containerd[1553]: time="2026-03-03T13:49:01.901857828Z" level=info msg="CreateContainer within sandbox \"799727023f44c1b7e6041e6e65e77d24bb651f2deff70094b0aa606ab5c70fd6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 3 13:49:01.911213 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ff51058d6517cadd70293e8066a97c18dd6e2f06d16305fe4b2bd4fca070bda-rootfs.mount: Deactivated successfully. 
Mar 3 13:49:01.929029 containerd[1553]: time="2026-03-03T13:49:01.928407798Z" level=info msg="Container 5e5028f42c900b29e36ba817085bd627902af2f7f9d5008d9a27ab3675c820d0: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:49:01.953964 containerd[1553]: time="2026-03-03T13:49:01.953861330Z" level=info msg="CreateContainer within sandbox \"799727023f44c1b7e6041e6e65e77d24bb651f2deff70094b0aa606ab5c70fd6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5e5028f42c900b29e36ba817085bd627902af2f7f9d5008d9a27ab3675c820d0\"" Mar 3 13:49:01.955577 containerd[1553]: time="2026-03-03T13:49:01.955482372Z" level=info msg="StartContainer for \"5e5028f42c900b29e36ba817085bd627902af2f7f9d5008d9a27ab3675c820d0\"" Mar 3 13:49:01.958275 containerd[1553]: time="2026-03-03T13:49:01.958199276Z" level=info msg="connecting to shim 5e5028f42c900b29e36ba817085bd627902af2f7f9d5008d9a27ab3675c820d0" address="unix:///run/containerd/s/aad67003f63f2f238f0c1bca00d36d136b3a5988c5fe18904073a4a7cd3e2edf" protocol=ttrpc version=3 Mar 3 13:49:02.012786 systemd[1]: Started cri-containerd-5e5028f42c900b29e36ba817085bd627902af2f7f9d5008d9a27ab3675c820d0.scope - libcontainer container 5e5028f42c900b29e36ba817085bd627902af2f7f9d5008d9a27ab3675c820d0. Mar 3 13:49:02.147909 systemd[1]: cri-containerd-5e5028f42c900b29e36ba817085bd627902af2f7f9d5008d9a27ab3675c820d0.scope: Deactivated successfully. 
Mar 3 13:49:02.151923 containerd[1553]: time="2026-03-03T13:49:02.151826127Z" level=info msg="StartContainer for \"5e5028f42c900b29e36ba817085bd627902af2f7f9d5008d9a27ab3675c820d0\" returns successfully" Mar 3 13:49:02.152752 containerd[1553]: time="2026-03-03T13:49:02.152457216Z" level=info msg="received container exit event container_id:\"5e5028f42c900b29e36ba817085bd627902af2f7f9d5008d9a27ab3675c820d0\" id:\"5e5028f42c900b29e36ba817085bd627902af2f7f9d5008d9a27ab3675c820d0\" pid:3247 exited_at:{seconds:1772545742 nanos:150506717}" Mar 3 13:49:02.914880 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e5028f42c900b29e36ba817085bd627902af2f7f9d5008d9a27ab3675c820d0-rootfs.mount: Deactivated successfully. Mar 3 13:49:02.918757 kubelet[2706]: E0303 13:49:02.918714 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:49:03.039574 containerd[1553]: time="2026-03-03T13:49:03.038770427Z" level=info msg="CreateContainer within sandbox \"799727023f44c1b7e6041e6e65e77d24bb651f2deff70094b0aa606ab5c70fd6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 3 13:49:03.394549 containerd[1553]: time="2026-03-03T13:49:03.381963921Z" level=info msg="Container b84ec1b0347b9ad41da242f98c922c7c744ebf583240b18242fc9ba440a5107b: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:49:04.346453 containerd[1553]: time="2026-03-03T13:49:04.346281207Z" level=info msg="CreateContainer within sandbox \"799727023f44c1b7e6041e6e65e77d24bb651f2deff70094b0aa606ab5c70fd6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b84ec1b0347b9ad41da242f98c922c7c744ebf583240b18242fc9ba440a5107b\"" Mar 3 13:49:04.467889 containerd[1553]: time="2026-03-03T13:49:04.467470214Z" level=info msg="StartContainer for \"b84ec1b0347b9ad41da242f98c922c7c744ebf583240b18242fc9ba440a5107b\"" Mar 3 13:49:04.575150 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3927257533.mount: Deactivated successfully. Mar 3 13:49:04.657768 containerd[1553]: time="2026-03-03T13:49:04.648311367Z" level=info msg="connecting to shim b84ec1b0347b9ad41da242f98c922c7c744ebf583240b18242fc9ba440a5107b" address="unix:///run/containerd/s/aad67003f63f2f238f0c1bca00d36d136b3a5988c5fe18904073a4a7cd3e2edf" protocol=ttrpc version=3 Mar 3 13:49:04.878637 systemd[1]: Started cri-containerd-b84ec1b0347b9ad41da242f98c922c7c744ebf583240b18242fc9ba440a5107b.scope - libcontainer container b84ec1b0347b9ad41da242f98c922c7c744ebf583240b18242fc9ba440a5107b. Mar 3 13:49:05.347423 systemd[1]: cri-containerd-b84ec1b0347b9ad41da242f98c922c7c744ebf583240b18242fc9ba440a5107b.scope: Deactivated successfully. Mar 3 13:49:05.538866 containerd[1553]: time="2026-03-03T13:49:05.538560530Z" level=info msg="received container exit event container_id:\"b84ec1b0347b9ad41da242f98c922c7c744ebf583240b18242fc9ba440a5107b\" id:\"b84ec1b0347b9ad41da242f98c922c7c744ebf583240b18242fc9ba440a5107b\" pid:3290 exited_at:{seconds:1772545745 nanos:393859483}" Mar 3 13:49:05.740862 containerd[1553]: time="2026-03-03T13:49:05.740747582Z" level=info msg="StartContainer for \"b84ec1b0347b9ad41da242f98c922c7c744ebf583240b18242fc9ba440a5107b\" returns successfully" Mar 3 13:49:06.435172 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b84ec1b0347b9ad41da242f98c922c7c744ebf583240b18242fc9ba440a5107b-rootfs.mount: Deactivated successfully. 
Mar 3 13:49:06.834592 kubelet[2706]: E0303 13:49:06.834371 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:49:06.846296 containerd[1553]: time="2026-03-03T13:49:06.845863993Z" level=info msg="CreateContainer within sandbox \"799727023f44c1b7e6041e6e65e77d24bb651f2deff70094b0aa606ab5c70fd6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 3 13:49:06.881350 containerd[1553]: time="2026-03-03T13:49:06.880199266Z" level=info msg="Container d593133299325dfa06eca4e2c40028a9d3467d524ef4ec186e62527964dfe6ed: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:49:06.915649 containerd[1553]: time="2026-03-03T13:49:06.907786068Z" level=info msg="CreateContainer within sandbox \"799727023f44c1b7e6041e6e65e77d24bb651f2deff70094b0aa606ab5c70fd6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d593133299325dfa06eca4e2c40028a9d3467d524ef4ec186e62527964dfe6ed\"" Mar 3 13:49:06.920989 containerd[1553]: time="2026-03-03T13:49:06.920909588Z" level=info msg="StartContainer for \"d593133299325dfa06eca4e2c40028a9d3467d524ef4ec186e62527964dfe6ed\"" Mar 3 13:49:06.939051 containerd[1553]: time="2026-03-03T13:49:06.938924370Z" level=info msg="connecting to shim d593133299325dfa06eca4e2c40028a9d3467d524ef4ec186e62527964dfe6ed" address="unix:///run/containerd/s/aad67003f63f2f238f0c1bca00d36d136b3a5988c5fe18904073a4a7cd3e2edf" protocol=ttrpc version=3 Mar 3 13:49:06.990516 systemd[1]: Started cri-containerd-d593133299325dfa06eca4e2c40028a9d3467d524ef4ec186e62527964dfe6ed.scope - libcontainer container d593133299325dfa06eca4e2c40028a9d3467d524ef4ec186e62527964dfe6ed. 
Mar 3 13:49:07.198534 containerd[1553]: time="2026-03-03T13:49:07.198484883Z" level=info msg="StartContainer for \"d593133299325dfa06eca4e2c40028a9d3467d524ef4ec186e62527964dfe6ed\" returns successfully"
Mar 3 13:49:07.419166 kubelet[2706]: I0303 13:49:07.417227 2706 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Mar 3 13:49:07.549062 systemd[1]: Created slice kubepods-burstable-pod1aef7d29_4eb5_441f_af95_521a1030466f.slice - libcontainer container kubepods-burstable-pod1aef7d29_4eb5_441f_af95_521a1030466f.slice.
Mar 3 13:49:07.572268 systemd[1]: Created slice kubepods-burstable-pod95ae6038_e082_4309_b5d7_92f8f8d2078d.slice - libcontainer container kubepods-burstable-pod95ae6038_e082_4309_b5d7_92f8f8d2078d.slice.
Mar 3 13:49:07.608201 kubelet[2706]: I0303 13:49:07.608034 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1aef7d29-4eb5-441f-af95-521a1030466f-config-volume\") pod \"coredns-66bc5c9577-sssnw\" (UID: \"1aef7d29-4eb5-441f-af95-521a1030466f\") " pod="kube-system/coredns-66bc5c9577-sssnw"
Mar 3 13:49:07.711776 kubelet[2706]: I0303 13:49:07.711539 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwbcd\" (UniqueName: \"kubernetes.io/projected/95ae6038-e082-4309-b5d7-92f8f8d2078d-kube-api-access-dwbcd\") pod \"coredns-66bc5c9577-wcmn2\" (UID: \"95ae6038-e082-4309-b5d7-92f8f8d2078d\") " pod="kube-system/coredns-66bc5c9577-wcmn2"
Mar 3 13:49:07.713221 kubelet[2706]: I0303 13:49:07.713175 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95ae6038-e082-4309-b5d7-92f8f8d2078d-config-volume\") pod \"coredns-66bc5c9577-wcmn2\" (UID: \"95ae6038-e082-4309-b5d7-92f8f8d2078d\") " pod="kube-system/coredns-66bc5c9577-wcmn2"
Mar 3 13:49:07.713799 kubelet[2706]: I0303 13:49:07.713773 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr2bv\" (UniqueName: \"kubernetes.io/projected/1aef7d29-4eb5-441f-af95-521a1030466f-kube-api-access-pr2bv\") pod \"coredns-66bc5c9577-sssnw\" (UID: \"1aef7d29-4eb5-441f-af95-521a1030466f\") " pod="kube-system/coredns-66bc5c9577-sssnw"
Mar 3 13:49:07.870438 kubelet[2706]: E0303 13:49:07.870287 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:49:07.871602 containerd[1553]: time="2026-03-03T13:49:07.871479272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sssnw,Uid:1aef7d29-4eb5-441f-af95-521a1030466f,Namespace:kube-system,Attempt:0,}"
Mar 3 13:49:07.881175 kubelet[2706]: E0303 13:49:07.881036 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:49:07.888572 kubelet[2706]: E0303 13:49:07.888395 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:49:07.889169 containerd[1553]: time="2026-03-03T13:49:07.888931627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wcmn2,Uid:95ae6038-e082-4309-b5d7-92f8f8d2078d,Namespace:kube-system,Attempt:0,}"
Mar 3 13:49:08.210659 containerd[1553]: time="2026-03-03T13:49:08.210451571Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:49:08.212282 containerd[1553]: time="2026-03-03T13:49:08.212193215Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Mar 3 13:49:08.214043 containerd[1553]: time="2026-03-03T13:49:08.213946094Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:49:08.216065 containerd[1553]: time="2026-03-03T13:49:08.215921817Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 8.871153815s"
Mar 3 13:49:08.216065 containerd[1553]: time="2026-03-03T13:49:08.216002637Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Mar 3 13:49:08.231156 containerd[1553]: time="2026-03-03T13:49:08.230181303Z" level=info msg="CreateContainer within sandbox \"77c66afe0f1d3de6ea79f3060a43139bbdbae64ec2896ff29b4e6aeb3664695b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 3 13:49:08.251921 containerd[1553]: time="2026-03-03T13:49:08.251526877Z" level=info msg="Container 914fcb9bd19046b967dfdf7756220596d43bcaaf1136a112f8ccaed2bf61af68: CDI devices from CRI Config.CDIDevices: []"
Mar 3 13:49:08.278394 containerd[1553]: time="2026-03-03T13:49:08.278268598Z" level=info msg="CreateContainer within sandbox \"77c66afe0f1d3de6ea79f3060a43139bbdbae64ec2896ff29b4e6aeb3664695b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"914fcb9bd19046b967dfdf7756220596d43bcaaf1136a112f8ccaed2bf61af68\""
Mar 3 13:49:08.279756 containerd[1553]: time="2026-03-03T13:49:08.279642690Z" level=info msg="StartContainer for \"914fcb9bd19046b967dfdf7756220596d43bcaaf1136a112f8ccaed2bf61af68\""
Mar 3 13:49:08.281810 containerd[1553]: time="2026-03-03T13:49:08.281454089Z" level=info msg="connecting to shim 914fcb9bd19046b967dfdf7756220596d43bcaaf1136a112f8ccaed2bf61af68" address="unix:///run/containerd/s/91a889a25ff977cf276963e81cb40219f5cb5ec3284619644647ee3d42c6d8e7" protocol=ttrpc version=3
Mar 3 13:49:08.335501 systemd[1]: Started cri-containerd-914fcb9bd19046b967dfdf7756220596d43bcaaf1136a112f8ccaed2bf61af68.scope - libcontainer container 914fcb9bd19046b967dfdf7756220596d43bcaaf1136a112f8ccaed2bf61af68.
Mar 3 13:49:08.474779 containerd[1553]: time="2026-03-03T13:49:08.473432955Z" level=info msg="StartContainer for \"914fcb9bd19046b967dfdf7756220596d43bcaaf1136a112f8ccaed2bf61af68\" returns successfully"
Mar 3 13:49:08.893369 kubelet[2706]: E0303 13:49:08.893025 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:49:08.893369 kubelet[2706]: E0303 13:49:08.893032 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:49:08.950474 kubelet[2706]: I0303 13:49:08.949444 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fpcpv" podStartSLOduration=10.985974541000001 podStartE2EDuration="25.949422206s" podCreationTimestamp="2026-03-03 13:48:43 +0000 UTC" firstStartedPulling="2026-03-03 13:48:44.380592539 +0000 UTC m=+7.976322686" lastFinishedPulling="2026-03-03 13:48:59.344040204 +0000 UTC m=+22.939770351" observedRunningTime="2026-03-03 13:49:08.005467851 +0000 UTC m=+31.601198019" watchObservedRunningTime="2026-03-03 13:49:08.949422206 +0000 UTC m=+32.545152363"
Mar 3 13:49:09.901505 kubelet[2706]: E0303 13:49:09.899989 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:49:09.901505 kubelet[2706]: E0303 13:49:09.900576 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:49:12.441874 systemd-networkd[1461]: cilium_host: Link UP
Mar 3 13:49:12.442395 systemd-networkd[1461]: cilium_net: Link UP
Mar 3 13:49:12.444001 systemd-networkd[1461]: cilium_net: Gained carrier
Mar 3 13:49:12.444405 systemd-networkd[1461]: cilium_host: Gained carrier
Mar 3 13:49:12.639058 systemd-networkd[1461]: cilium_vxlan: Link UP
Mar 3 13:49:12.639218 systemd-networkd[1461]: cilium_vxlan: Gained carrier
Mar 3 13:49:12.800557 systemd-networkd[1461]: cilium_host: Gained IPv6LL
Mar 3 13:49:13.021287 kernel: NET: Registered PF_ALG protocol family
Mar 3 13:49:13.216650 systemd-networkd[1461]: cilium_net: Gained IPv6LL
Mar 3 13:49:14.149263 systemd-networkd[1461]: lxc_health: Link UP
Mar 3 13:49:14.152430 systemd-networkd[1461]: lxc_health: Gained carrier
Mar 3 13:49:14.224622 kubelet[2706]: E0303 13:49:14.224521 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:49:14.266136 kubelet[2706]: I0303 13:49:14.265844 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-qxbgt" podStartSLOduration=7.596276312 podStartE2EDuration="31.265620754s" podCreationTimestamp="2026-03-03 13:48:43 +0000 UTC" firstStartedPulling="2026-03-03 13:48:44.549653372 +0000 UTC m=+8.145383519" lastFinishedPulling="2026-03-03 13:49:08.218997814 +0000 UTC m=+31.814727961" observedRunningTime="2026-03-03 13:49:08.951200642 +0000 UTC m=+32.546930819" watchObservedRunningTime="2026-03-03 13:49:14.265620754 +0000 UTC m=+37.861350901"
Mar 3 13:49:14.575302 kernel: eth0: renamed from tmpa910d
Mar 3 13:49:14.578421 systemd-networkd[1461]: lxc9d8e19201247: Link UP
Mar 3 13:49:14.580068 systemd-networkd[1461]: lxc1a7d04a0b453: Link UP
Mar 3 13:49:14.595282 kernel: eth0: renamed from tmp1a854
Mar 3 13:49:14.603346 systemd-networkd[1461]: lxc9d8e19201247: Gained carrier
Mar 3 13:49:14.608026 systemd-networkd[1461]: lxc1a7d04a0b453: Gained carrier
Mar 3 13:49:14.689224 systemd-networkd[1461]: cilium_vxlan: Gained IPv6LL
Mar 3 13:49:14.955898 kubelet[2706]: E0303 13:49:14.955788 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:49:15.585521 systemd-networkd[1461]: lxc_health: Gained IPv6LL
Mar 3 13:49:15.956462 kubelet[2706]: E0303 13:49:15.956336 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:49:16.160480 systemd-networkd[1461]: lxc9d8e19201247: Gained IPv6LL
Mar 3 13:49:16.224771 systemd-networkd[1461]: lxc1a7d04a0b453: Gained IPv6LL
Mar 3 13:49:19.118152 containerd[1553]: time="2026-03-03T13:49:19.117281539Z" level=info msg="connecting to shim a910d4f8bfc57501351c8bec5399996ea01584d68a54d19dc705e9139bdf499a" address="unix:///run/containerd/s/2bb7aabe009ee0c67ce77e849cdf30b707531fae643197086ba9713efb9ba756" namespace=k8s.io protocol=ttrpc version=3
Mar 3 13:49:19.123510 containerd[1553]: time="2026-03-03T13:49:19.123440151Z" level=info msg="connecting to shim 1a854c1dd68f8c3fcce5ad09eb3991668a2c8922554d1dd526945c32eea98154" address="unix:///run/containerd/s/23065f625b1ec2e1027f6839291886dd904236f4c9e8de788aabe380af9132c4" namespace=k8s.io protocol=ttrpc version=3
Mar 3 13:49:19.166656 systemd[1]: Started cri-containerd-1a854c1dd68f8c3fcce5ad09eb3991668a2c8922554d1dd526945c32eea98154.scope - libcontainer container 1a854c1dd68f8c3fcce5ad09eb3991668a2c8922554d1dd526945c32eea98154.
Mar 3 13:49:19.187302 systemd[1]: Started cri-containerd-a910d4f8bfc57501351c8bec5399996ea01584d68a54d19dc705e9139bdf499a.scope - libcontainer container a910d4f8bfc57501351c8bec5399996ea01584d68a54d19dc705e9139bdf499a.
Mar 3 13:49:19.193302 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 3 13:49:19.213379 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 3 13:49:19.293302 containerd[1553]: time="2026-03-03T13:49:19.293004845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sssnw,Uid:1aef7d29-4eb5-441f-af95-521a1030466f,Namespace:kube-system,Attempt:0,} returns sandbox id \"a910d4f8bfc57501351c8bec5399996ea01584d68a54d19dc705e9139bdf499a\""
Mar 3 13:49:19.293560 containerd[1553]: time="2026-03-03T13:49:19.293470008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wcmn2,Uid:95ae6038-e082-4309-b5d7-92f8f8d2078d,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a854c1dd68f8c3fcce5ad09eb3991668a2c8922554d1dd526945c32eea98154\""
Mar 3 13:49:19.294821 kubelet[2706]: E0303 13:49:19.294676 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:49:19.295271 kubelet[2706]: E0303 13:49:19.294971 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:49:19.304355 containerd[1553]: time="2026-03-03T13:49:19.304159670Z" level=info msg="CreateContainer within sandbox \"1a854c1dd68f8c3fcce5ad09eb3991668a2c8922554d1dd526945c32eea98154\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 3 13:49:19.309503 containerd[1553]: time="2026-03-03T13:49:19.309227907Z" level=info msg="CreateContainer within sandbox \"a910d4f8bfc57501351c8bec5399996ea01584d68a54d19dc705e9139bdf499a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 3 13:49:19.338684 containerd[1553]: time="2026-03-03T13:49:19.338637657Z" level=info msg="Container af17f38c99c05eac642e4e7c44c0e9d7bb9990176c39c1302086f4249b03aea9: CDI devices from CRI Config.CDIDevices: []"
Mar 3 13:49:19.341412 containerd[1553]: time="2026-03-03T13:49:19.341339527Z" level=info msg="Container 6ad01440e8fdb17bfc0799328412d2fa6257d7c129b40960bc58e874bc49b455: CDI devices from CRI Config.CDIDevices: []"
Mar 3 13:49:19.348484 containerd[1553]: time="2026-03-03T13:49:19.348381181Z" level=info msg="CreateContainer within sandbox \"1a854c1dd68f8c3fcce5ad09eb3991668a2c8922554d1dd526945c32eea98154\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"af17f38c99c05eac642e4e7c44c0e9d7bb9990176c39c1302086f4249b03aea9\""
Mar 3 13:49:19.350340 containerd[1553]: time="2026-03-03T13:49:19.349874560Z" level=info msg="StartContainer for \"af17f38c99c05eac642e4e7c44c0e9d7bb9990176c39c1302086f4249b03aea9\""
Mar 3 13:49:19.352568 containerd[1553]: time="2026-03-03T13:49:19.352378121Z" level=info msg="connecting to shim af17f38c99c05eac642e4e7c44c0e9d7bb9990176c39c1302086f4249b03aea9" address="unix:///run/containerd/s/23065f625b1ec2e1027f6839291886dd904236f4c9e8de788aabe380af9132c4" protocol=ttrpc version=3
Mar 3 13:49:19.362594 containerd[1553]: time="2026-03-03T13:49:19.362549845Z" level=info msg="CreateContainer within sandbox \"a910d4f8bfc57501351c8bec5399996ea01584d68a54d19dc705e9139bdf499a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6ad01440e8fdb17bfc0799328412d2fa6257d7c129b40960bc58e874bc49b455\""
Mar 3 13:49:19.364010 containerd[1553]: time="2026-03-03T13:49:19.363914574Z" level=info msg="StartContainer for \"6ad01440e8fdb17bfc0799328412d2fa6257d7c129b40960bc58e874bc49b455\""
Mar 3 13:49:19.365429 containerd[1553]: time="2026-03-03T13:49:19.365310448Z" level=info msg="connecting to shim 6ad01440e8fdb17bfc0799328412d2fa6257d7c129b40960bc58e874bc49b455" address="unix:///run/containerd/s/2bb7aabe009ee0c67ce77e849cdf30b707531fae643197086ba9713efb9ba756" protocol=ttrpc version=3
Mar 3 13:49:19.393353 systemd[1]: Started cri-containerd-af17f38c99c05eac642e4e7c44c0e9d7bb9990176c39c1302086f4249b03aea9.scope - libcontainer container af17f38c99c05eac642e4e7c44c0e9d7bb9990176c39c1302086f4249b03aea9.
Mar 3 13:49:19.402265 systemd[1]: Started cri-containerd-6ad01440e8fdb17bfc0799328412d2fa6257d7c129b40960bc58e874bc49b455.scope - libcontainer container 6ad01440e8fdb17bfc0799328412d2fa6257d7c129b40960bc58e874bc49b455.
Mar 3 13:49:19.476247 containerd[1553]: time="2026-03-03T13:49:19.475536525Z" level=info msg="StartContainer for \"6ad01440e8fdb17bfc0799328412d2fa6257d7c129b40960bc58e874bc49b455\" returns successfully"
Mar 3 13:49:19.495625 containerd[1553]: time="2026-03-03T13:49:19.495421386Z" level=info msg="StartContainer for \"af17f38c99c05eac642e4e7c44c0e9d7bb9990176c39c1302086f4249b03aea9\" returns successfully"
Mar 3 13:49:19.986221 kubelet[2706]: E0303 13:49:19.986169 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:49:19.988200 kubelet[2706]: E0303 13:49:19.986764 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:49:20.022874 kubelet[2706]: I0303 13:49:20.022254 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-sssnw" podStartSLOduration=37.022235793 podStartE2EDuration="37.022235793s" podCreationTimestamp="2026-03-03 13:48:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 13:49:20.021648477 +0000 UTC m=+43.617378624" watchObservedRunningTime="2026-03-03 13:49:20.022235793 +0000 UTC m=+43.617965940"
Mar 3 13:49:20.066023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2952829317.mount: Deactivated successfully.
Mar 3 13:49:20.988876 kubelet[2706]: E0303 13:49:20.988595 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:49:21.992049 kubelet[2706]: E0303 13:49:21.991946 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:49:29.988381 kubelet[2706]: E0303 13:49:29.987962 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:49:30.018453 kubelet[2706]: E0303 13:49:30.018280 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:49:30.028417 kubelet[2706]: I0303 13:49:30.028221 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-wcmn2" podStartSLOduration=47.028199354 podStartE2EDuration="47.028199354s" podCreationTimestamp="2026-03-03 13:48:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 13:49:20.090420756 +0000 UTC m=+43.686150903" watchObservedRunningTime="2026-03-03 13:49:30.028199354 +0000 UTC m=+53.623929522"
Mar 3 13:49:52.531378 kubelet[2706]: E0303 13:49:52.531284 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:49:56.836391 systemd[1]: Started sshd@7-10.0.0.100:22-10.0.0.1:58478.service - OpenSSH per-connection server daemon (10.0.0.1:58478).
Mar 3 13:49:56.922902 sshd[4070]: Accepted publickey for core from 10.0.0.1 port 58478 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:49:56.924765 sshd-session[4070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:49:56.933214 systemd-logind[1535]: New session 8 of user core.
Mar 3 13:49:56.945346 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 3 13:49:57.065330 sshd[4073]: Connection closed by 10.0.0.1 port 58478
Mar 3 13:49:57.065767 sshd-session[4070]: pam_unix(sshd:session): session closed for user core
Mar 3 13:49:57.070278 systemd[1]: sshd@7-10.0.0.100:22-10.0.0.1:58478.service: Deactivated successfully.
Mar 3 13:49:57.072544 systemd[1]: session-8.scope: Deactivated successfully.
Mar 3 13:49:57.074806 systemd-logind[1535]: Session 8 logged out. Waiting for processes to exit.
Mar 3 13:49:57.077006 systemd-logind[1535]: Removed session 8.
Mar 3 13:50:02.084463 systemd[1]: Started sshd@8-10.0.0.100:22-10.0.0.1:50140.service - OpenSSH per-connection server daemon (10.0.0.1:50140).
Mar 3 13:50:02.154433 sshd[4088]: Accepted publickey for core from 10.0.0.1 port 50140 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:50:02.156973 sshd-session[4088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:50:02.165287 systemd-logind[1535]: New session 9 of user core.
Mar 3 13:50:02.179311 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 3 13:50:02.291028 sshd[4092]: Connection closed by 10.0.0.1 port 50140
Mar 3 13:50:02.291499 sshd-session[4088]: pam_unix(sshd:session): session closed for user core
Mar 3 13:50:02.297339 systemd[1]: sshd@8-10.0.0.100:22-10.0.0.1:50140.service: Deactivated successfully.
Mar 3 13:50:02.300576 systemd[1]: session-9.scope: Deactivated successfully.
Mar 3 13:50:02.301949 systemd-logind[1535]: Session 9 logged out. Waiting for processes to exit.
Mar 3 13:50:02.304616 systemd-logind[1535]: Removed session 9.
Mar 3 13:50:05.536826 kubelet[2706]: E0303 13:50:05.533444 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:50:07.518919 systemd[1]: Started sshd@9-10.0.0.100:22-10.0.0.1:50148.service - OpenSSH per-connection server daemon (10.0.0.1:50148).
Mar 3 13:50:07.859468 sshd[4106]: Accepted publickey for core from 10.0.0.1 port 50148 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:50:07.929459 sshd-session[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:50:08.046819 systemd-logind[1535]: New session 10 of user core.
Mar 3 13:50:08.063576 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 3 13:50:09.647048 sshd[4109]: Connection closed by 10.0.0.1 port 50148
Mar 3 13:50:09.649745 sshd-session[4106]: pam_unix(sshd:session): session closed for user core
Mar 3 13:50:09.667195 systemd[1]: sshd@9-10.0.0.100:22-10.0.0.1:50148.service: Deactivated successfully.
Mar 3 13:50:09.679589 systemd[1]: session-10.scope: Deactivated successfully.
Mar 3 13:50:09.729997 systemd-logind[1535]: Session 10 logged out. Waiting for processes to exit.
Mar 3 13:50:09.744368 systemd-logind[1535]: Removed session 10.
Mar 3 13:50:10.543263 kubelet[2706]: E0303 13:50:10.542228 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:50:14.554992 kubelet[2706]: E0303 13:50:14.554833 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:50:14.669799 systemd[1]: Started sshd@10-10.0.0.100:22-10.0.0.1:59998.service - OpenSSH per-connection server daemon (10.0.0.1:59998).
Mar 3 13:50:14.785763 sshd[4125]: Accepted publickey for core from 10.0.0.1 port 59998 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:50:14.788720 sshd-session[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:50:14.820968 systemd-logind[1535]: New session 11 of user core.
Mar 3 13:50:14.833505 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 3 13:50:15.090487 sshd[4128]: Connection closed by 10.0.0.1 port 59998
Mar 3 13:50:15.091453 sshd-session[4125]: pam_unix(sshd:session): session closed for user core
Mar 3 13:50:15.107698 systemd[1]: sshd@10-10.0.0.100:22-10.0.0.1:59998.service: Deactivated successfully.
Mar 3 13:50:15.112855 systemd[1]: session-11.scope: Deactivated successfully.
Mar 3 13:50:15.117736 systemd-logind[1535]: Session 11 logged out. Waiting for processes to exit.
Mar 3 13:50:15.125015 systemd-logind[1535]: Removed session 11.
Mar 3 13:50:18.540795 kubelet[2706]: E0303 13:50:18.540179 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:50:20.126256 systemd[1]: Started sshd@11-10.0.0.100:22-10.0.0.1:53974.service - OpenSSH per-connection server daemon (10.0.0.1:53974).
Mar 3 13:50:20.240655 sshd[4146]: Accepted publickey for core from 10.0.0.1 port 53974 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:50:20.245530 sshd-session[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:50:20.263526 systemd-logind[1535]: New session 12 of user core.
Mar 3 13:50:20.277445 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 3 13:50:20.536811 sshd[4149]: Connection closed by 10.0.0.1 port 53974
Mar 3 13:50:20.538039 sshd-session[4146]: pam_unix(sshd:session): session closed for user core
Mar 3 13:50:20.549784 systemd[1]: sshd@11-10.0.0.100:22-10.0.0.1:53974.service: Deactivated successfully.
Mar 3 13:50:20.553403 systemd[1]: session-12.scope: Deactivated successfully.
Mar 3 13:50:20.557832 systemd-logind[1535]: Session 12 logged out. Waiting for processes to exit.
Mar 3 13:50:20.560716 systemd-logind[1535]: Removed session 12.
Mar 3 13:50:25.561699 systemd[1]: Started sshd@12-10.0.0.100:22-10.0.0.1:53990.service - OpenSSH per-connection server daemon (10.0.0.1:53990).
Mar 3 13:50:25.663257 sshd[4163]: Accepted publickey for core from 10.0.0.1 port 53990 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:50:25.666320 sshd-session[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:50:25.681237 systemd-logind[1535]: New session 13 of user core.
Mar 3 13:50:25.720326 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 3 13:50:25.933795 sshd[4166]: Connection closed by 10.0.0.1 port 53990
Mar 3 13:50:25.934047 sshd-session[4163]: pam_unix(sshd:session): session closed for user core
Mar 3 13:50:25.944282 systemd[1]: sshd@12-10.0.0.100:22-10.0.0.1:53990.service: Deactivated successfully.
Mar 3 13:50:25.949050 systemd[1]: session-13.scope: Deactivated successfully.
Mar 3 13:50:25.951969 systemd-logind[1535]: Session 13 logged out. Waiting for processes to exit.
Mar 3 13:50:25.955288 systemd-logind[1535]: Removed session 13.
Mar 3 13:50:30.961809 systemd[1]: Started sshd@13-10.0.0.100:22-10.0.0.1:45270.service - OpenSSH per-connection server daemon (10.0.0.1:45270).
Mar 3 13:50:31.051795 sshd[4180]: Accepted publickey for core from 10.0.0.1 port 45270 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:50:31.054941 sshd-session[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:50:31.067903 systemd-logind[1535]: New session 14 of user core.
Mar 3 13:50:31.078744 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 3 13:50:31.256561 sshd[4183]: Connection closed by 10.0.0.1 port 45270
Mar 3 13:50:31.256922 sshd-session[4180]: pam_unix(sshd:session): session closed for user core
Mar 3 13:50:31.264808 systemd[1]: sshd@13-10.0.0.100:22-10.0.0.1:45270.service: Deactivated successfully.
Mar 3 13:50:31.268825 systemd[1]: session-14.scope: Deactivated successfully.
Mar 3 13:50:31.271055 systemd-logind[1535]: Session 14 logged out. Waiting for processes to exit.
Mar 3 13:50:31.274658 systemd-logind[1535]: Removed session 14.
Mar 3 13:50:36.287385 systemd[1]: Started sshd@14-10.0.0.100:22-10.0.0.1:45278.service - OpenSSH per-connection server daemon (10.0.0.1:45278).
Mar 3 13:50:36.399876 sshd[4199]: Accepted publickey for core from 10.0.0.1 port 45278 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:50:36.403752 sshd-session[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:50:36.440709 systemd-logind[1535]: New session 15 of user core.
Mar 3 13:50:36.460513 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 3 13:50:36.958458 sshd[4202]: Connection closed by 10.0.0.1 port 45278
Mar 3 13:50:36.958535 sshd-session[4199]: pam_unix(sshd:session): session closed for user core
Mar 3 13:50:36.977709 systemd[1]: sshd@14-10.0.0.100:22-10.0.0.1:45278.service: Deactivated successfully.
Mar 3 13:50:36.982267 systemd[1]: session-15.scope: Deactivated successfully.
Mar 3 13:50:36.989183 systemd-logind[1535]: Session 15 logged out. Waiting for processes to exit.
Mar 3 13:50:36.998700 systemd-logind[1535]: Removed session 15.
Mar 3 13:50:41.543443 kubelet[2706]: E0303 13:50:41.541818 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:50:42.022182 systemd[1]: Started sshd@15-10.0.0.100:22-10.0.0.1:51324.service - OpenSSH per-connection server daemon (10.0.0.1:51324).
Mar 3 13:50:42.218567 sshd[4218]: Accepted publickey for core from 10.0.0.1 port 51324 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:50:42.223596 sshd-session[4218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:50:42.239894 systemd-logind[1535]: New session 16 of user core.
Mar 3 13:50:42.266258 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 3 13:50:42.742191 sshd[4221]: Connection closed by 10.0.0.1 port 51324
Mar 3 13:50:42.741457 sshd-session[4218]: pam_unix(sshd:session): session closed for user core
Mar 3 13:50:42.757547 systemd[1]: sshd@15-10.0.0.100:22-10.0.0.1:51324.service: Deactivated successfully.
Mar 3 13:50:42.769398 systemd[1]: session-16.scope: Deactivated successfully.
Mar 3 13:50:42.834264 systemd-logind[1535]: Session 16 logged out. Waiting for processes to exit.
Mar 3 13:50:42.840802 systemd-logind[1535]: Removed session 16.
Mar 3 13:50:44.541856 kubelet[2706]: E0303 13:50:44.537481 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:50:47.780851 systemd[1]: Started sshd@16-10.0.0.100:22-10.0.0.1:51344.service - OpenSSH per-connection server daemon (10.0.0.1:51344).
Mar 3 13:50:48.018412 sshd[4239]: Accepted publickey for core from 10.0.0.1 port 51344 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:50:48.023443 sshd-session[4239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:50:48.063251 systemd-logind[1535]: New session 17 of user core.
Mar 3 13:50:48.098349 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 3 13:50:48.676921 sshd[4242]: Connection closed by 10.0.0.1 port 51344
Mar 3 13:50:48.679252 sshd-session[4239]: pam_unix(sshd:session): session closed for user core
Mar 3 13:50:48.696965 systemd[1]: sshd@16-10.0.0.100:22-10.0.0.1:51344.service: Deactivated successfully.
Mar 3 13:50:48.708299 systemd[1]: session-17.scope: Deactivated successfully.
Mar 3 13:50:48.732709 systemd-logind[1535]: Session 17 logged out. Waiting for processes to exit.
Mar 3 13:50:48.740226 systemd-logind[1535]: Removed session 17.
Mar 3 13:50:51.538976 kubelet[2706]: E0303 13:50:51.536200 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:50:53.746493 systemd[1]: Started sshd@17-10.0.0.100:22-10.0.0.1:37468.service - OpenSSH per-connection server daemon (10.0.0.1:37468).
Mar 3 13:50:53.982985 sshd[4256]: Accepted publickey for core from 10.0.0.1 port 37468 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:50:53.989908 sshd-session[4256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:50:54.025423 systemd-logind[1535]: New session 18 of user core.
Mar 3 13:50:54.053416 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 3 13:50:54.590988 sshd[4259]: Connection closed by 10.0.0.1 port 37468
Mar 3 13:50:54.588315 sshd-session[4256]: pam_unix(sshd:session): session closed for user core
Mar 3 13:50:54.613784 systemd[1]: sshd@17-10.0.0.100:22-10.0.0.1:37468.service: Deactivated successfully.
Mar 3 13:50:54.623475 systemd[1]: session-18.scope: Deactivated successfully.
Mar 3 13:50:54.633196 systemd-logind[1535]: Session 18 logged out. Waiting for processes to exit.
Mar 3 13:50:54.650900 systemd-logind[1535]: Removed session 18.
Mar 3 13:50:59.616763 systemd[1]: Started sshd@18-10.0.0.100:22-10.0.0.1:37482.service - OpenSSH per-connection server daemon (10.0.0.1:37482).
Mar 3 13:50:59.771877 sshd[4273]: Accepted publickey for core from 10.0.0.1 port 37482 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:50:59.779521 sshd-session[4273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:50:59.795491 systemd-logind[1535]: New session 19 of user core.
Mar 3 13:50:59.823884 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 3 13:51:00.256360 sshd[4276]: Connection closed by 10.0.0.1 port 37482
Mar 3 13:51:00.253431 sshd-session[4273]: pam_unix(sshd:session): session closed for user core
Mar 3 13:51:00.277575 systemd[1]: sshd@18-10.0.0.100:22-10.0.0.1:37482.service: Deactivated successfully.
Mar 3 13:51:00.291768 systemd[1]: session-19.scope: Deactivated successfully.
Mar 3 13:51:00.298998 systemd-logind[1535]: Session 19 logged out. Waiting for processes to exit.
Mar 3 13:51:00.317918 systemd-logind[1535]: Removed session 19.
Mar 3 13:51:05.352347 systemd[1]: Started sshd@19-10.0.0.100:22-10.0.0.1:60264.service - OpenSSH per-connection server daemon (10.0.0.1:60264).
Mar 3 13:51:05.642346 sshd[4292]: Accepted publickey for core from 10.0.0.1 port 60264 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:51:05.645935 sshd-session[4292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:51:05.694351 systemd-logind[1535]: New session 20 of user core.
Mar 3 13:51:05.737322 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 3 13:51:06.295060 sshd[4295]: Connection closed by 10.0.0.1 port 60264
Mar 3 13:51:06.297311 sshd-session[4292]: pam_unix(sshd:session): session closed for user core
Mar 3 13:51:06.333790 systemd[1]: sshd@19-10.0.0.100:22-10.0.0.1:60264.service: Deactivated successfully.
Mar 3 13:51:06.348450 systemd[1]: session-20.scope: Deactivated successfully.
Mar 3 13:51:06.367339 systemd-logind[1535]: Session 20 logged out. Waiting for processes to exit.
Mar 3 13:51:06.371754 systemd-logind[1535]: Removed session 20.
Mar 3 13:51:06.544276 kubelet[2706]: E0303 13:51:06.543759 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:51:11.326199 systemd[1]: Started sshd@20-10.0.0.100:22-10.0.0.1:47972.service - OpenSSH per-connection server daemon (10.0.0.1:47972).
Mar 3 13:51:11.568051 sshd[4310]: Accepted publickey for core from 10.0.0.1 port 47972 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:51:11.570638 sshd-session[4310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:51:11.600319 systemd-logind[1535]: New session 21 of user core.
Mar 3 13:51:11.625296 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 3 13:51:12.162965 sshd[4313]: Connection closed by 10.0.0.1 port 47972
Mar 3 13:51:12.161885 sshd-session[4310]: pam_unix(sshd:session): session closed for user core
Mar 3 13:51:12.196603 systemd[1]: sshd@20-10.0.0.100:22-10.0.0.1:47972.service: Deactivated successfully.
Mar 3 13:51:12.232938 systemd[1]: session-21.scope: Deactivated successfully.
Mar 3 13:51:12.247063 systemd-logind[1535]: Session 21 logged out. Waiting for processes to exit.
Mar 3 13:51:12.263446 systemd-logind[1535]: Removed session 21.
Mar 3 13:51:16.541863 kubelet[2706]: E0303 13:51:16.539490 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:51:17.213513 systemd[1]: Started sshd@21-10.0.0.100:22-10.0.0.1:47990.service - OpenSSH per-connection server daemon (10.0.0.1:47990).
Mar 3 13:51:17.481598 sshd[4329]: Accepted publickey for core from 10.0.0.1 port 47990 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:51:17.496359 sshd-session[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:51:17.535048 systemd-logind[1535]: New session 22 of user core.
Mar 3 13:51:17.563888 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 3 13:51:18.083286 sshd[4332]: Connection closed by 10.0.0.1 port 47990
Mar 3 13:51:18.084482 sshd-session[4329]: pam_unix(sshd:session): session closed for user core
Mar 3 13:51:18.097641 systemd[1]: sshd@21-10.0.0.100:22-10.0.0.1:47990.service: Deactivated successfully.
Mar 3 13:51:18.124651 systemd[1]: session-22.scope: Deactivated successfully.
Mar 3 13:51:18.141405 systemd-logind[1535]: Session 22 logged out. Waiting for processes to exit.
Mar 3 13:51:18.147416 systemd-logind[1535]: Removed session 22.
Mar 3 13:51:19.537023 kubelet[2706]: E0303 13:51:19.533029 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:51:20.532225 kubelet[2706]: E0303 13:51:20.530980 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:51:23.163567 systemd[1]: Started sshd@22-10.0.0.100:22-10.0.0.1:54788.service - OpenSSH per-connection server daemon (10.0.0.1:54788).
Mar 3 13:51:23.324922 sshd[4347]: Accepted publickey for core from 10.0.0.1 port 54788 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:51:23.330592 sshd-session[4347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:51:23.377971 systemd-logind[1535]: New session 23 of user core.
Mar 3 13:51:23.384644 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 3 13:51:23.818339 sshd[4350]: Connection closed by 10.0.0.1 port 54788
Mar 3 13:51:23.822812 sshd-session[4347]: pam_unix(sshd:session): session closed for user core
Mar 3 13:51:23.850451 systemd[1]: sshd@22-10.0.0.100:22-10.0.0.1:54788.service: Deactivated successfully.
Mar 3 13:51:23.865463 systemd[1]: session-23.scope: Deactivated successfully.
Mar 3 13:51:23.877280 systemd-logind[1535]: Session 23 logged out. Waiting for processes to exit.
Mar 3 13:51:23.895917 systemd-logind[1535]: Removed session 23.
Mar 3 13:51:28.945507 systemd[1]: Started sshd@23-10.0.0.100:22-10.0.0.1:54814.service - OpenSSH per-connection server daemon (10.0.0.1:54814).
Mar 3 13:51:29.158249 sshd[4365]: Accepted publickey for core from 10.0.0.1 port 54814 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:51:29.167597 sshd-session[4365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:51:29.240418 systemd-logind[1535]: New session 24 of user core.
Mar 3 13:51:29.264340 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 3 13:51:29.617950 sshd[4368]: Connection closed by 10.0.0.1 port 54814
Mar 3 13:51:29.619866 sshd-session[4365]: pam_unix(sshd:session): session closed for user core
Mar 3 13:51:29.636978 systemd[1]: sshd@23-10.0.0.100:22-10.0.0.1:54814.service: Deactivated successfully.
Mar 3 13:51:29.644652 systemd[1]: session-24.scope: Deactivated successfully.
Mar 3 13:51:29.651321 systemd-logind[1535]: Session 24 logged out. Waiting for processes to exit.
Mar 3 13:51:29.658827 systemd-logind[1535]: Removed session 24.
Mar 3 13:51:34.686649 systemd[1]: Started sshd@24-10.0.0.100:22-10.0.0.1:48220.service - OpenSSH per-connection server daemon (10.0.0.1:48220).
Mar 3 13:51:35.113471 sshd[4382]: Accepted publickey for core from 10.0.0.1 port 48220 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:51:35.150399 sshd-session[4382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:51:35.204825 systemd-logind[1535]: New session 25 of user core.
Mar 3 13:51:35.253642 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 3 13:51:36.015675 sshd[4385]: Connection closed by 10.0.0.1 port 48220
Mar 3 13:51:36.019457 sshd-session[4382]: pam_unix(sshd:session): session closed for user core
Mar 3 13:51:36.052651 systemd-logind[1535]: Session 25 logged out. Waiting for processes to exit.
Mar 3 13:51:36.057006 systemd[1]: sshd@24-10.0.0.100:22-10.0.0.1:48220.service: Deactivated successfully.
Mar 3 13:51:36.074048 systemd[1]: session-25.scope: Deactivated successfully.
Mar 3 13:51:36.085447 systemd-logind[1535]: Removed session 25.
Mar 3 13:51:41.084457 systemd[1]: Started sshd@25-10.0.0.100:22-10.0.0.1:44544.service - OpenSSH per-connection server daemon (10.0.0.1:44544).
Mar 3 13:51:41.319400 sshd[4401]: Accepted publickey for core from 10.0.0.1 port 44544 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:51:41.338392 sshd-session[4401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:51:41.409446 systemd-logind[1535]: New session 26 of user core.
Mar 3 13:51:41.452993 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 3 13:51:42.333228 sshd[4404]: Connection closed by 10.0.0.1 port 44544
Mar 3 13:51:42.329615 sshd-session[4401]: pam_unix(sshd:session): session closed for user core
Mar 3 13:51:42.368661 systemd[1]: sshd@25-10.0.0.100:22-10.0.0.1:44544.service: Deactivated successfully.
Mar 3 13:51:42.373865 systemd[1]: session-26.scope: Deactivated successfully.
Mar 3 13:51:42.384399 systemd-logind[1535]: Session 26 logged out. Waiting for processes to exit.
Mar 3 13:51:42.389328 systemd-logind[1535]: Removed session 26.
Mar 3 13:51:42.542491 kubelet[2706]: E0303 13:51:42.540994 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:51:47.361487 systemd[1]: Started sshd@26-10.0.0.100:22-10.0.0.1:44566.service - OpenSSH per-connection server daemon (10.0.0.1:44566).
Mar 3 13:51:47.508043 sshd[4420]: Accepted publickey for core from 10.0.0.1 port 44566 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:51:47.511968 sshd-session[4420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:51:47.534288 systemd-logind[1535]: New session 27 of user core.
Mar 3 13:51:47.555362 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 3 13:51:47.957373 sshd[4423]: Connection closed by 10.0.0.1 port 44566
Mar 3 13:51:47.967513 sshd-session[4420]: pam_unix(sshd:session): session closed for user core
Mar 3 13:51:47.992255 systemd[1]: sshd@26-10.0.0.100:22-10.0.0.1:44566.service: Deactivated successfully.
Mar 3 13:51:47.992524 systemd-logind[1535]: Session 27 logged out. Waiting for processes to exit.
Mar 3 13:51:47.997666 systemd[1]: session-27.scope: Deactivated successfully.
Mar 3 13:51:48.005999 systemd-logind[1535]: Removed session 27.
Mar 3 13:51:52.991969 systemd[1]: Started sshd@27-10.0.0.100:22-10.0.0.1:34156.service - OpenSSH per-connection server daemon (10.0.0.1:34156).
Mar 3 13:51:53.098339 sshd[4438]: Accepted publickey for core from 10.0.0.1 port 34156 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:51:53.101549 sshd-session[4438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:51:53.123213 systemd-logind[1535]: New session 28 of user core.
Mar 3 13:51:53.131434 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 3 13:51:53.349667 sshd[4441]: Connection closed by 10.0.0.1 port 34156
Mar 3 13:51:53.354496 sshd-session[4438]: pam_unix(sshd:session): session closed for user core
Mar 3 13:51:53.365655 systemd[1]: Started sshd@28-10.0.0.100:22-10.0.0.1:34172.service - OpenSSH per-connection server daemon (10.0.0.1:34172).
Mar 3 13:51:53.367541 systemd[1]: sshd@27-10.0.0.100:22-10.0.0.1:34156.service: Deactivated successfully.
Mar 3 13:51:53.375346 systemd[1]: session-28.scope: Deactivated successfully.
Mar 3 13:51:53.392287 systemd-logind[1535]: Session 28 logged out. Waiting for processes to exit.
Mar 3 13:51:53.401969 systemd-logind[1535]: Removed session 28.
Mar 3 13:51:53.479285 sshd[4452]: Accepted publickey for core from 10.0.0.1 port 34172 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:51:53.482485 sshd-session[4452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:51:53.493353 systemd-logind[1535]: New session 29 of user core.
Mar 3 13:51:53.518533 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 3 13:51:53.735033 sshd[4458]: Connection closed by 10.0.0.1 port 34172
Mar 3 13:51:53.736400 sshd-session[4452]: pam_unix(sshd:session): session closed for user core
Mar 3 13:51:53.748449 systemd[1]: sshd@28-10.0.0.100:22-10.0.0.1:34172.service: Deactivated successfully.
Mar 3 13:51:53.752045 systemd[1]: session-29.scope: Deactivated successfully.
Mar 3 13:51:53.753818 systemd-logind[1535]: Session 29 logged out. Waiting for processes to exit.
Mar 3 13:51:53.759513 systemd[1]: Started sshd@29-10.0.0.100:22-10.0.0.1:34182.service - OpenSSH per-connection server daemon (10.0.0.1:34182).
Mar 3 13:51:53.761695 systemd-logind[1535]: Removed session 29.
Mar 3 13:51:53.878920 sshd[4470]: Accepted publickey for core from 10.0.0.1 port 34182 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:51:53.884325 sshd-session[4470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:51:53.917060 systemd-logind[1535]: New session 30 of user core.
Mar 3 13:51:53.929345 systemd[1]: Started session-30.scope - Session 30 of User core.
Mar 3 13:51:54.155395 sshd[4473]: Connection closed by 10.0.0.1 port 34182
Mar 3 13:51:54.156475 sshd-session[4470]: pam_unix(sshd:session): session closed for user core
Mar 3 13:51:54.167664 systemd[1]: sshd@29-10.0.0.100:22-10.0.0.1:34182.service: Deactivated successfully.
Mar 3 13:51:54.172358 systemd[1]: session-30.scope: Deactivated successfully.
Mar 3 13:51:54.174519 systemd-logind[1535]: Session 30 logged out. Waiting for processes to exit.
Mar 3 13:51:54.179633 systemd-logind[1535]: Removed session 30.
Mar 3 13:51:59.214771 systemd[1]: Started sshd@30-10.0.0.100:22-10.0.0.1:34218.service - OpenSSH per-connection server daemon (10.0.0.1:34218).
Mar 3 13:51:59.370553 sshd[4486]: Accepted publickey for core from 10.0.0.1 port 34218 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:51:59.373400 sshd-session[4486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:51:59.407007 systemd-logind[1535]: New session 31 of user core.
Mar 3 13:51:59.416937 systemd[1]: Started session-31.scope - Session 31 of User core.
Mar 3 13:51:59.802364 sshd[4489]: Connection closed by 10.0.0.1 port 34218
Mar 3 13:51:59.811799 sshd-session[4486]: pam_unix(sshd:session): session closed for user core
Mar 3 13:51:59.831404 systemd[1]: sshd@30-10.0.0.100:22-10.0.0.1:34218.service: Deactivated successfully.
Mar 3 13:51:59.836689 systemd[1]: session-31.scope: Deactivated successfully.
Mar 3 13:51:59.840839 systemd-logind[1535]: Session 31 logged out. Waiting for processes to exit.
Mar 3 13:51:59.848036 systemd-logind[1535]: Removed session 31.
Mar 3 13:52:04.829800 systemd[1]: Started sshd@31-10.0.0.100:22-10.0.0.1:54894.service - OpenSSH per-connection server daemon (10.0.0.1:54894).
Mar 3 13:52:05.034255 sshd[4502]: Accepted publickey for core from 10.0.0.1 port 54894 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:52:05.037557 sshd-session[4502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:52:05.068488 systemd-logind[1535]: New session 32 of user core.
Mar 3 13:52:05.090515 systemd[1]: Started session-32.scope - Session 32 of User core.
Mar 3 13:52:05.362210 sshd[4505]: Connection closed by 10.0.0.1 port 54894
Mar 3 13:52:05.363451 sshd-session[4502]: pam_unix(sshd:session): session closed for user core
Mar 3 13:52:05.380643 systemd[1]: sshd@31-10.0.0.100:22-10.0.0.1:54894.service: Deactivated successfully.
Mar 3 13:52:05.385805 systemd[1]: session-32.scope: Deactivated successfully.
Mar 3 13:52:05.390814 systemd-logind[1535]: Session 32 logged out. Waiting for processes to exit.
Mar 3 13:52:05.400661 systemd-logind[1535]: Removed session 32.
Mar 3 13:52:06.547241 kubelet[2706]: E0303 13:52:06.546433 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:52:08.536207 kubelet[2706]: E0303 13:52:08.534739 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:52:10.429496 systemd[1]: Started sshd@32-10.0.0.100:22-10.0.0.1:48918.service - OpenSSH per-connection server daemon (10.0.0.1:48918).
Mar 3 13:52:10.581889 sshd[4518]: Accepted publickey for core from 10.0.0.1 port 48918 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:52:10.587764 sshd-session[4518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:52:10.618721 systemd-logind[1535]: New session 33 of user core.
Mar 3 13:52:10.641665 systemd[1]: Started session-33.scope - Session 33 of User core.
Mar 3 13:52:10.970735 sshd[4521]: Connection closed by 10.0.0.1 port 48918
Mar 3 13:52:10.970396 sshd-session[4518]: pam_unix(sshd:session): session closed for user core
Mar 3 13:52:10.978222 systemd[1]: sshd@32-10.0.0.100:22-10.0.0.1:48918.service: Deactivated successfully.
Mar 3 13:52:10.990582 systemd[1]: session-33.scope: Deactivated successfully.
Mar 3 13:52:11.007402 systemd-logind[1535]: Session 33 logged out. Waiting for processes to exit.
Mar 3 13:52:11.016918 systemd-logind[1535]: Removed session 33.
Mar 3 13:52:16.015451 systemd[1]: Started sshd@33-10.0.0.100:22-10.0.0.1:48932.service - OpenSSH per-connection server daemon (10.0.0.1:48932).
Mar 3 13:52:16.183586 sshd[4537]: Accepted publickey for core from 10.0.0.1 port 48932 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:52:16.188678 sshd-session[4537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:52:16.225908 systemd-logind[1535]: New session 34 of user core.
Mar 3 13:52:16.237349 systemd[1]: Started session-34.scope - Session 34 of User core.
Mar 3 13:52:16.656437 sshd[4540]: Connection closed by 10.0.0.1 port 48932
Mar 3 13:52:16.655727 sshd-session[4537]: pam_unix(sshd:session): session closed for user core
Mar 3 13:52:16.683457 systemd[1]: sshd@33-10.0.0.100:22-10.0.0.1:48932.service: Deactivated successfully.
Mar 3 13:52:16.688747 systemd[1]: session-34.scope: Deactivated successfully.
Mar 3 13:52:16.692919 systemd-logind[1535]: Session 34 logged out. Waiting for processes to exit.
Mar 3 13:52:16.713287 systemd-logind[1535]: Removed session 34.
Mar 3 13:52:18.535183 kubelet[2706]: E0303 13:52:18.532817 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:52:18.535183 kubelet[2706]: E0303 13:52:18.533698 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:52:21.728395 systemd[1]: Started sshd@34-10.0.0.100:22-10.0.0.1:35160.service - OpenSSH per-connection server daemon (10.0.0.1:35160).
Mar 3 13:52:21.884552 sshd[4554]: Accepted publickey for core from 10.0.0.1 port 35160 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:52:21.888189 sshd-session[4554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:52:21.915223 systemd-logind[1535]: New session 35 of user core.
Mar 3 13:52:21.935373 systemd[1]: Started session-35.scope - Session 35 of User core.
Mar 3 13:52:22.381570 sshd[4557]: Connection closed by 10.0.0.1 port 35160
Mar 3 13:52:22.382486 sshd-session[4554]: pam_unix(sshd:session): session closed for user core
Mar 3 13:52:22.393709 systemd[1]: sshd@34-10.0.0.100:22-10.0.0.1:35160.service: Deactivated successfully.
Mar 3 13:52:22.412697 systemd[1]: session-35.scope: Deactivated successfully.
Mar 3 13:52:22.420481 systemd-logind[1535]: Session 35 logged out. Waiting for processes to exit.
Mar 3 13:52:22.435929 systemd-logind[1535]: Removed session 35.
Mar 3 13:52:24.533917 kubelet[2706]: E0303 13:52:24.533735 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:52:27.435286 systemd[1]: Started sshd@35-10.0.0.100:22-10.0.0.1:35182.service - OpenSSH per-connection server daemon (10.0.0.1:35182).
Mar 3 13:52:27.578927 sshd[4570]: Accepted publickey for core from 10.0.0.1 port 35182 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:52:27.585599 sshd-session[4570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:52:27.615704 systemd-logind[1535]: New session 36 of user core.
Mar 3 13:52:27.635891 systemd[1]: Started session-36.scope - Session 36 of User core.
Mar 3 13:52:27.870029 sshd[4573]: Connection closed by 10.0.0.1 port 35182
Mar 3 13:52:27.874184 sshd-session[4570]: pam_unix(sshd:session): session closed for user core
Mar 3 13:52:27.893776 systemd[1]: sshd@35-10.0.0.100:22-10.0.0.1:35182.service: Deactivated successfully.
Mar 3 13:52:27.898452 systemd[1]: session-36.scope: Deactivated successfully.
Mar 3 13:52:27.909649 systemd-logind[1535]: Session 36 logged out. Waiting for processes to exit.
Mar 3 13:52:27.920063 systemd-logind[1535]: Removed session 36.
Mar 3 13:52:31.533523 kubelet[2706]: E0303 13:52:31.533303 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:52:32.906787 systemd[1]: Started sshd@36-10.0.0.100:22-10.0.0.1:44188.service - OpenSSH per-connection server daemon (10.0.0.1:44188).
Mar 3 13:52:33.018137 sshd[4586]: Accepted publickey for core from 10.0.0.1 port 44188 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:52:33.023878 sshd-session[4586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:52:33.060602 systemd-logind[1535]: New session 37 of user core.
Mar 3 13:52:33.077452 systemd[1]: Started session-37.scope - Session 37 of User core.
Mar 3 13:52:33.340611 sshd[4589]: Connection closed by 10.0.0.1 port 44188
Mar 3 13:52:33.341463 sshd-session[4586]: pam_unix(sshd:session): session closed for user core
Mar 3 13:52:33.354423 systemd[1]: sshd@36-10.0.0.100:22-10.0.0.1:44188.service: Deactivated successfully.
Mar 3 13:52:33.363430 systemd[1]: session-37.scope: Deactivated successfully.
Mar 3 13:52:33.372229 systemd-logind[1535]: Session 37 logged out. Waiting for processes to exit.
Mar 3 13:52:33.378216 systemd-logind[1535]: Removed session 37.
Mar 3 13:52:35.531737 kubelet[2706]: E0303 13:52:35.531294 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:52:38.360623 systemd[1]: Started sshd@37-10.0.0.100:22-10.0.0.1:44204.service - OpenSSH per-connection server daemon (10.0.0.1:44204).
Mar 3 13:52:38.495288 sshd[4604]: Accepted publickey for core from 10.0.0.1 port 44204 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:52:38.499609 sshd-session[4604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:52:38.526900 systemd-logind[1535]: New session 38 of user core.
Mar 3 13:52:38.537523 systemd[1]: Started session-38.scope - Session 38 of User core.
Mar 3 13:52:38.844693 sshd[4607]: Connection closed by 10.0.0.1 port 44204
Mar 3 13:52:38.846472 sshd-session[4604]: pam_unix(sshd:session): session closed for user core
Mar 3 13:52:38.873466 systemd[1]: sshd@37-10.0.0.100:22-10.0.0.1:44204.service: Deactivated successfully.
Mar 3 13:52:38.880225 systemd[1]: session-38.scope: Deactivated successfully.
Mar 3 13:52:38.884476 systemd-logind[1535]: Session 38 logged out. Waiting for processes to exit.
Mar 3 13:52:38.893772 systemd[1]: Started sshd@38-10.0.0.100:22-10.0.0.1:44206.service - OpenSSH per-connection server daemon (10.0.0.1:44206).
Mar 3 13:52:38.903499 systemd-logind[1535]: Removed session 38.
Mar 3 13:52:39.027882 sshd[4622]: Accepted publickey for core from 10.0.0.1 port 44206 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:52:39.038616 sshd-session[4622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:52:39.072652 systemd-logind[1535]: New session 39 of user core.
Mar 3 13:52:39.082587 systemd[1]: Started session-39.scope - Session 39 of User core.
Mar 3 13:52:40.135226 sshd[4625]: Connection closed by 10.0.0.1 port 44206
Mar 3 13:52:40.136602 sshd-session[4622]: pam_unix(sshd:session): session closed for user core
Mar 3 13:52:40.151489 systemd[1]: sshd@38-10.0.0.100:22-10.0.0.1:44206.service: Deactivated successfully.
Mar 3 13:52:40.156460 systemd[1]: session-39.scope: Deactivated successfully.
Mar 3 13:52:40.160691 systemd-logind[1535]: Session 39 logged out. Waiting for processes to exit.
Mar 3 13:52:40.163487 systemd[1]: Started sshd@39-10.0.0.100:22-10.0.0.1:52878.service - OpenSSH per-connection server daemon (10.0.0.1:52878).
Mar 3 13:52:40.170513 systemd-logind[1535]: Removed session 39.
Mar 3 13:52:40.316898 sshd[4638]: Accepted publickey for core from 10.0.0.1 port 52878 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:52:40.324322 sshd-session[4638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:52:40.363060 systemd-logind[1535]: New session 40 of user core.
Mar 3 13:52:40.384914 systemd[1]: Started session-40.scope - Session 40 of User core.
Mar 3 13:52:42.349492 sshd[4641]: Connection closed by 10.0.0.1 port 52878
Mar 3 13:52:42.354937 sshd-session[4638]: pam_unix(sshd:session): session closed for user core
Mar 3 13:52:42.392686 systemd[1]: sshd@39-10.0.0.100:22-10.0.0.1:52878.service: Deactivated successfully.
Mar 3 13:52:42.397401 systemd[1]: session-40.scope: Deactivated successfully.
Mar 3 13:52:42.408192 systemd-logind[1535]: Session 40 logged out. Waiting for processes to exit.
Mar 3 13:52:42.422717 systemd[1]: Started sshd@40-10.0.0.100:22-10.0.0.1:52882.service - OpenSSH per-connection server daemon (10.0.0.1:52882).
Mar 3 13:52:42.434743 systemd-logind[1535]: Removed session 40.
Mar 3 13:52:42.578399 sshd[4661]: Accepted publickey for core from 10.0.0.1 port 52882 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:52:42.583444 sshd-session[4661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:52:42.628015 systemd-logind[1535]: New session 41 of user core.
Mar 3 13:52:42.635312 systemd[1]: Started session-41.scope - Session 41 of User core.
Mar 3 13:52:43.276686 sshd[4665]: Connection closed by 10.0.0.1 port 52882
Mar 3 13:52:43.281711 sshd-session[4661]: pam_unix(sshd:session): session closed for user core
Mar 3 13:52:43.303355 systemd[1]: Started sshd@41-10.0.0.100:22-10.0.0.1:52894.service - OpenSSH per-connection server daemon (10.0.0.1:52894).
Mar 3 13:52:43.316359 systemd[1]: sshd@40-10.0.0.100:22-10.0.0.1:52882.service: Deactivated successfully.
Mar 3 13:52:43.321013 systemd[1]: session-41.scope: Deactivated successfully.
Mar 3 13:52:43.324662 systemd-logind[1535]: Session 41 logged out. Waiting for processes to exit.
Mar 3 13:52:43.336462 systemd-logind[1535]: Removed session 41.
Mar 3 13:52:43.447714 sshd[4676]: Accepted publickey for core from 10.0.0.1 port 52894 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:52:43.452426 sshd-session[4676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:52:43.489958 systemd-logind[1535]: New session 42 of user core.
Mar 3 13:52:43.517456 systemd[1]: Started session-42.scope - Session 42 of User core.
Mar 3 13:52:43.775750 sshd[4682]: Connection closed by 10.0.0.1 port 52894
Mar 3 13:52:43.776526 sshd-session[4676]: pam_unix(sshd:session): session closed for user core
Mar 3 13:52:43.787664 systemd[1]: sshd@41-10.0.0.100:22-10.0.0.1:52894.service: Deactivated successfully.
Mar 3 13:52:43.791522 systemd[1]: session-42.scope: Deactivated successfully.
Mar 3 13:52:43.799493 systemd-logind[1535]: Session 42 logged out. Waiting for processes to exit.
Mar 3 13:52:43.813411 systemd-logind[1535]: Removed session 42.
Mar 3 13:52:48.807507 systemd[1]: Started sshd@42-10.0.0.100:22-10.0.0.1:52910.service - OpenSSH per-connection server daemon (10.0.0.1:52910).
Mar 3 13:52:49.001816 sshd[4698]: Accepted publickey for core from 10.0.0.1 port 52910 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:52:49.014682 sshd-session[4698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:52:49.045059 systemd-logind[1535]: New session 43 of user core.
Mar 3 13:52:49.059418 systemd[1]: Started session-43.scope - Session 43 of User core.
Mar 3 13:52:49.378221 sshd[4701]: Connection closed by 10.0.0.1 port 52910
Mar 3 13:52:49.377947 sshd-session[4698]: pam_unix(sshd:session): session closed for user core
Mar 3 13:52:49.389504 systemd[1]: sshd@42-10.0.0.100:22-10.0.0.1:52910.service: Deactivated successfully.
Mar 3 13:52:49.393306 systemd[1]: session-43.scope: Deactivated successfully.
Mar 3 13:52:49.398015 systemd-logind[1535]: Session 43 logged out. Waiting for processes to exit.
Mar 3 13:52:49.420049 systemd-logind[1535]: Removed session 43.
Mar 3 13:52:54.438441 systemd[1]: Started sshd@43-10.0.0.100:22-10.0.0.1:49470.service - OpenSSH per-connection server daemon (10.0.0.1:49470).
Mar 3 13:52:54.596579 sshd[4715]: Accepted publickey for core from 10.0.0.1 port 49470 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:52:54.599834 sshd-session[4715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:52:54.626431 systemd-logind[1535]: New session 44 of user core.
Mar 3 13:52:54.648943 systemd[1]: Started session-44.scope - Session 44 of User core.
Mar 3 13:52:55.069520 sshd[4718]: Connection closed by 10.0.0.1 port 49470
Mar 3 13:52:55.071796 sshd-session[4715]: pam_unix(sshd:session): session closed for user core
Mar 3 13:52:55.091067 systemd[1]: sshd@43-10.0.0.100:22-10.0.0.1:49470.service: Deactivated successfully.
Mar 3 13:52:55.096693 systemd[1]: session-44.scope: Deactivated successfully.
Mar 3 13:52:55.103547 systemd-logind[1535]: Session 44 logged out. Waiting for processes to exit.
Mar 3 13:52:55.112773 systemd-logind[1535]: Removed session 44.
Mar 3 13:53:00.094717 systemd[1]: Started sshd@44-10.0.0.100:22-10.0.0.1:35662.service - OpenSSH per-connection server daemon (10.0.0.1:35662).
Mar 3 13:53:00.227235 sshd[4732]: Accepted publickey for core from 10.0.0.1 port 35662 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:53:00.227685 sshd-session[4732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:53:00.244377 systemd-logind[1535]: New session 45 of user core.
Mar 3 13:53:00.257066 systemd[1]: Started session-45.scope - Session 45 of User core.
Mar 3 13:53:00.459169 sshd[4735]: Connection closed by 10.0.0.1 port 35662
Mar 3 13:53:00.460465 sshd-session[4732]: pam_unix(sshd:session): session closed for user core
Mar 3 13:53:00.472637 systemd[1]: sshd@44-10.0.0.100:22-10.0.0.1:35662.service: Deactivated successfully.
Mar 3 13:53:00.483977 systemd[1]: session-45.scope: Deactivated successfully.
Mar 3 13:53:00.489325 systemd-logind[1535]: Session 45 logged out. Waiting for processes to exit.
Mar 3 13:53:00.495577 systemd-logind[1535]: Removed session 45.
Mar 3 13:53:04.532068 kubelet[2706]: E0303 13:53:04.531818 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:53:05.519766 systemd[1]: Started sshd@45-10.0.0.100:22-10.0.0.1:35694.service - OpenSSH per-connection server daemon (10.0.0.1:35694).
Mar 3 13:53:05.680052 sshd[4749]: Accepted publickey for core from 10.0.0.1 port 35694 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:53:05.686569 sshd-session[4749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:53:05.723554 systemd-logind[1535]: New session 46 of user core.
Mar 3 13:53:05.743410 systemd[1]: Started session-46.scope - Session 46 of User core.
Mar 3 13:53:06.071360 sshd[4752]: Connection closed by 10.0.0.1 port 35694
Mar 3 13:53:06.070611 sshd-session[4749]: pam_unix(sshd:session): session closed for user core
Mar 3 13:53:06.085351 systemd[1]: sshd@45-10.0.0.100:22-10.0.0.1:35694.service: Deactivated successfully.
Mar 3 13:53:06.095225 systemd[1]: session-46.scope: Deactivated successfully.
Mar 3 13:53:06.099062 systemd-logind[1535]: Session 46 logged out. Waiting for processes to exit.
Mar 3 13:53:06.111047 systemd-logind[1535]: Removed session 46.
Mar 3 13:53:11.114948 systemd[1]: Started sshd@46-10.0.0.100:22-10.0.0.1:48770.service - OpenSSH per-connection server daemon (10.0.0.1:48770).
Mar 3 13:53:11.271755 sshd[4767]: Accepted publickey for core from 10.0.0.1 port 48770 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:53:11.274816 sshd-session[4767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:53:11.303553 systemd-logind[1535]: New session 47 of user core.
Mar 3 13:53:11.316531 systemd[1]: Started session-47.scope - Session 47 of User core.
Mar 3 13:53:11.596331 sshd[4770]: Connection closed by 10.0.0.1 port 48770
Mar 3 13:53:11.595616 sshd-session[4767]: pam_unix(sshd:session): session closed for user core
Mar 3 13:53:11.617822 systemd[1]: sshd@46-10.0.0.100:22-10.0.0.1:48770.service: Deactivated successfully.
Mar 3 13:53:11.623562 systemd[1]: session-47.scope: Deactivated successfully.
Mar 3 13:53:11.628276 systemd-logind[1535]: Session 47 logged out. Waiting for processes to exit.
Mar 3 13:53:11.634720 systemd-logind[1535]: Removed session 47.
Mar 3 13:53:13.535646 kubelet[2706]: E0303 13:53:13.535069 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:53:16.627297 systemd[1]: Started sshd@47-10.0.0.100:22-10.0.0.1:48798.service - OpenSSH per-connection server daemon (10.0.0.1:48798).
Mar 3 13:53:16.750702 sshd[4787]: Accepted publickey for core from 10.0.0.1 port 48798 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:53:16.754907 sshd-session[4787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:53:16.771489 systemd-logind[1535]: New session 48 of user core.
Mar 3 13:53:16.786414 systemd[1]: Started session-48.scope - Session 48 of User core.
Mar 3 13:53:17.061642 sshd[4790]: Connection closed by 10.0.0.1 port 48798
Mar 3 13:53:17.059606 sshd-session[4787]: pam_unix(sshd:session): session closed for user core
Mar 3 13:53:17.072467 systemd[1]: sshd@47-10.0.0.100:22-10.0.0.1:48798.service: Deactivated successfully.
Mar 3 13:53:17.077061 systemd[1]: session-48.scope: Deactivated successfully.
Mar 3 13:53:17.086532 systemd-logind[1535]: Session 48 logged out. Waiting for processes to exit.
Mar 3 13:53:17.092332 systemd-logind[1535]: Removed session 48.
Mar 3 13:53:21.536236 kubelet[2706]: E0303 13:53:21.535807 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:53:22.102453 systemd[1]: Started sshd@48-10.0.0.100:22-10.0.0.1:44440.service - OpenSSH per-connection server daemon (10.0.0.1:44440).
Mar 3 13:53:22.254148 sshd[4804]: Accepted publickey for core from 10.0.0.1 port 44440 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:53:22.259920 sshd-session[4804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:53:22.303398 systemd-logind[1535]: New session 49 of user core.
Mar 3 13:53:22.328839 systemd[1]: Started session-49.scope - Session 49 of User core.
Mar 3 13:53:22.539264 sshd[4807]: Connection closed by 10.0.0.1 port 44440
Mar 3 13:53:22.540469 sshd-session[4804]: pam_unix(sshd:session): session closed for user core
Mar 3 13:53:22.553268 systemd[1]: sshd@48-10.0.0.100:22-10.0.0.1:44440.service: Deactivated successfully.
Mar 3 13:53:22.558392 systemd[1]: session-49.scope: Deactivated successfully.
Mar 3 13:53:22.567284 systemd-logind[1535]: Session 49 logged out. Waiting for processes to exit.
Mar 3 13:53:22.570778 systemd-logind[1535]: Removed session 49.
Mar 3 13:53:27.593471 systemd[1]: Started sshd@49-10.0.0.100:22-10.0.0.1:44476.service - OpenSSH per-connection server daemon (10.0.0.1:44476).
Mar 3 13:53:27.720611 sshd[4820]: Accepted publickey for core from 10.0.0.1 port 44476 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:53:27.729451 sshd-session[4820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:53:27.753268 systemd-logind[1535]: New session 50 of user core.
Mar 3 13:53:27.758566 systemd[1]: Started session-50.scope - Session 50 of User core.
Mar 3 13:53:28.040491 sshd[4823]: Connection closed by 10.0.0.1 port 44476
Mar 3 13:53:28.040961 sshd-session[4820]: pam_unix(sshd:session): session closed for user core
Mar 3 13:53:28.049656 systemd[1]: sshd@49-10.0.0.100:22-10.0.0.1:44476.service: Deactivated successfully.
Mar 3 13:53:28.053804 systemd[1]: session-50.scope: Deactivated successfully.
Mar 3 13:53:28.058316 systemd-logind[1535]: Session 50 logged out. Waiting for processes to exit.
Mar 3 13:53:28.063927 systemd-logind[1535]: Removed session 50.
Mar 3 13:53:32.242285 containerd[1553]: time="2026-03-03T13:53:32.208722151Z" level=warning msg="container event discarded" container=5f1ef4742c77e3acd218799d3145a7c27bd36eaa28dc8ba2af9c52885f0bc0ba type=CONTAINER_CREATED_EVENT
Mar 3 13:53:32.242285 containerd[1553]: time="2026-03-03T13:53:32.242251659Z" level=warning msg="container event discarded" container=5f1ef4742c77e3acd218799d3145a7c27bd36eaa28dc8ba2af9c52885f0bc0ba type=CONTAINER_STARTED_EVENT
Mar 3 13:53:32.287598 containerd[1553]: time="2026-03-03T13:53:32.287424241Z" level=warning msg="container event discarded" container=61fe7f52153e7f858f25896d7c3c5fd2d47171f2dbcee88d449213c027c04f09 type=CONTAINER_CREATED_EVENT
Mar 3 13:53:32.287598 containerd[1553]: time="2026-03-03T13:53:32.287545107Z" level=warning msg="container event discarded" container=61fe7f52153e7f858f25896d7c3c5fd2d47171f2dbcee88d449213c027c04f09 type=CONTAINER_STARTED_EVENT
Mar 3 13:53:32.287598 containerd[1553]: time="2026-03-03T13:53:32.287565224Z" level=warning msg="container event discarded" container=f5b17e2f8e3167c310f86da3644ed6f557a80698bafa9c82d38d8d3797f90b77 type=CONTAINER_CREATED_EVENT
Mar 3 13:53:32.287598 containerd[1553]: time="2026-03-03T13:53:32.287575283Z" level=warning msg="container event discarded" container=f5b17e2f8e3167c310f86da3644ed6f557a80698bafa9c82d38d8d3797f90b77 type=CONTAINER_STARTED_EVENT
Mar 3 13:53:32.287598 containerd[1553]: time="2026-03-03T13:53:32.287586083Z" level=warning msg="container event discarded" container=3dcfba4a30a49f8ea0677acd64eb08b8e4f8b92f041b250021793cdfb47bf500 type=CONTAINER_CREATED_EVENT
Mar 3 13:53:32.287598 containerd[1553]: time="2026-03-03T13:53:32.287595060Z" level=warning msg="container event discarded" container=d4323885ea3c998345e063830892ec9c87ae5e44475f6e02c5af3de65375993a type=CONTAINER_CREATED_EVENT
Mar 3 13:53:32.287598 containerd[1553]: time="2026-03-03T13:53:32.287603826Z" level=warning msg="container event discarded" container=95404e0003355e8312a20cdc9f91a6eff948fd9be58b5e33f5470df7ae066400 type=CONTAINER_CREATED_EVENT
Mar 3 13:53:32.362416 containerd[1553]: time="2026-03-03T13:53:32.361295302Z" level=warning msg="container event discarded" container=3dcfba4a30a49f8ea0677acd64eb08b8e4f8b92f041b250021793cdfb47bf500 type=CONTAINER_STARTED_EVENT
Mar 3 13:53:32.401938 containerd[1553]: time="2026-03-03T13:53:32.401573152Z" level=warning msg="container event discarded" container=d4323885ea3c998345e063830892ec9c87ae5e44475f6e02c5af3de65375993a type=CONTAINER_STARTED_EVENT
Mar 3 13:53:32.428369 containerd[1553]: time="2026-03-03T13:53:32.424725033Z" level=warning msg="container event discarded" container=95404e0003355e8312a20cdc9f91a6eff948fd9be58b5e33f5470df7ae066400 type=CONTAINER_STARTED_EVENT
Mar 3 13:53:32.550710 kubelet[2706]: E0303 13:53:32.541360 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:53:33.073271 systemd[1]: Started sshd@50-10.0.0.100:22-10.0.0.1:56404.service - OpenSSH per-connection server daemon (10.0.0.1:56404).
Mar 3 13:53:33.300518 sshd[4837]: Accepted publickey for core from 10.0.0.1 port 56404 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:53:33.310755 sshd-session[4837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:53:33.363985 systemd-logind[1535]: New session 51 of user core.
Mar 3 13:53:33.408336 systemd[1]: Started session-51.scope - Session 51 of User core.
Mar 3 13:53:34.146411 sshd[4840]: Connection closed by 10.0.0.1 port 56404
Mar 3 13:53:34.145722 sshd-session[4837]: pam_unix(sshd:session): session closed for user core
Mar 3 13:53:34.178471 systemd[1]: sshd@50-10.0.0.100:22-10.0.0.1:56404.service: Deactivated successfully.
Mar 3 13:53:34.187481 systemd[1]: session-51.scope: Deactivated successfully.
Mar 3 13:53:34.199638 systemd-logind[1535]: Session 51 logged out. Waiting for processes to exit.
Mar 3 13:53:34.231847 systemd[1]: Started sshd@51-10.0.0.100:22-10.0.0.1:56408.service - OpenSSH per-connection server daemon (10.0.0.1:56408).
Mar 3 13:53:34.242989 systemd-logind[1535]: Removed session 51.
Mar 3 13:53:34.485458 sshd[4853]: Accepted publickey for core from 10.0.0.1 port 56408 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:53:34.489487 sshd-session[4853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:53:34.581726 systemd-logind[1535]: New session 52 of user core.
Mar 3 13:53:34.594520 systemd[1]: Started session-52.scope - Session 52 of User core.
Mar 3 13:53:38.563955 kubelet[2706]: E0303 13:53:38.563906 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:53:41.400605 containerd[1553]: time="2026-03-03T13:53:41.400431279Z" level=info msg="StopContainer for \"914fcb9bd19046b967dfdf7756220596d43bcaaf1136a112f8ccaed2bf61af68\" with timeout 30 (s)"
Mar 3 13:53:41.411904 containerd[1553]: time="2026-03-03T13:53:41.411858225Z" level=info msg="Stop container \"914fcb9bd19046b967dfdf7756220596d43bcaaf1136a112f8ccaed2bf61af68\" with signal terminated"
Mar 3 13:53:41.564782 systemd[1]: cri-containerd-914fcb9bd19046b967dfdf7756220596d43bcaaf1136a112f8ccaed2bf61af68.scope: Deactivated successfully.
Mar 3 13:53:41.565583 systemd[1]: cri-containerd-914fcb9bd19046b967dfdf7756220596d43bcaaf1136a112f8ccaed2bf61af68.scope: Consumed 1.514s CPU time, 28.5M memory peak, 4K written to disk.
Mar 3 13:53:41.583768 containerd[1553]: time="2026-03-03T13:53:41.583725404Z" level=info msg="StopContainer for \"d593133299325dfa06eca4e2c40028a9d3467d524ef4ec186e62527964dfe6ed\" with timeout 2 (s)"
Mar 3 13:53:41.584917 containerd[1553]: time="2026-03-03T13:53:41.584820456Z" level=info msg="Stop container \"d593133299325dfa06eca4e2c40028a9d3467d524ef4ec186e62527964dfe6ed\" with signal terminated"
Mar 3 13:53:41.593204 containerd[1553]: time="2026-03-03T13:53:41.592980870Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 3 13:53:41.595613 containerd[1553]: time="2026-03-03T13:53:41.595497485Z" level=info msg="received container exit event container_id:\"914fcb9bd19046b967dfdf7756220596d43bcaaf1136a112f8ccaed2bf61af68\" id:\"914fcb9bd19046b967dfdf7756220596d43bcaaf1136a112f8ccaed2bf61af68\" pid:3480 exited_at:{seconds:1772546021 nanos:581006181}"
Mar 3 13:53:41.666813 systemd-networkd[1461]: lxc_health: Link DOWN
Mar 3 13:53:41.667341 systemd-networkd[1461]: lxc_health: Lost carrier
Mar 3 13:53:41.764332 systemd[1]: cri-containerd-d593133299325dfa06eca4e2c40028a9d3467d524ef4ec186e62527964dfe6ed.scope: Deactivated successfully.
Mar 3 13:53:41.764786 systemd[1]: cri-containerd-d593133299325dfa06eca4e2c40028a9d3467d524ef4ec186e62527964dfe6ed.scope: Consumed 13.898s CPU time, 124M memory peak, 220K read from disk, 13.3M written to disk.
Mar 3 13:53:41.770737 containerd[1553]: time="2026-03-03T13:53:41.769762099Z" level=info msg="received container exit event container_id:\"d593133299325dfa06eca4e2c40028a9d3467d524ef4ec186e62527964dfe6ed\" id:\"d593133299325dfa06eca4e2c40028a9d3467d524ef4ec186e62527964dfe6ed\" pid:3337 exited_at:{seconds:1772546021 nanos:769284076}"
Mar 3 13:53:41.785951 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-914fcb9bd19046b967dfdf7756220596d43bcaaf1136a112f8ccaed2bf61af68-rootfs.mount: Deactivated successfully.
Mar 3 13:53:41.881307 containerd[1553]: time="2026-03-03T13:53:41.880919817Z" level=info msg="StopContainer for \"914fcb9bd19046b967dfdf7756220596d43bcaaf1136a112f8ccaed2bf61af68\" returns successfully"
Mar 3 13:53:41.886940 containerd[1553]: time="2026-03-03T13:53:41.886460102Z" level=info msg="StopPodSandbox for \"77c66afe0f1d3de6ea79f3060a43139bbdbae64ec2896ff29b4e6aeb3664695b\""
Mar 3 13:53:41.890417 containerd[1553]: time="2026-03-03T13:53:41.890365739Z" level=info msg="Container to stop \"914fcb9bd19046b967dfdf7756220596d43bcaaf1136a112f8ccaed2bf61af68\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 3 13:53:41.894735 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d593133299325dfa06eca4e2c40028a9d3467d524ef4ec186e62527964dfe6ed-rootfs.mount: Deactivated successfully.
Mar 3 13:53:41.932707 systemd[1]: cri-containerd-77c66afe0f1d3de6ea79f3060a43139bbdbae64ec2896ff29b4e6aeb3664695b.scope: Deactivated successfully.
Mar 3 13:53:41.950542 containerd[1553]: time="2026-03-03T13:53:41.946386826Z" level=info msg="StopContainer for \"d593133299325dfa06eca4e2c40028a9d3467d524ef4ec186e62527964dfe6ed\" returns successfully"
Mar 3 13:53:41.954844 containerd[1553]: time="2026-03-03T13:53:41.954648254Z" level=info msg="StopPodSandbox for \"799727023f44c1b7e6041e6e65e77d24bb651f2deff70094b0aa606ab5c70fd6\""
Mar 3 13:53:41.954844 containerd[1553]: time="2026-03-03T13:53:41.954801019Z" level=info msg="Container to stop \"97415b473779df1bc8607a6a5286b44637084e45fa2ab71aba9c3fa1daa82a21\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 3 13:53:41.954844 containerd[1553]: time="2026-03-03T13:53:41.954818662Z" level=info msg="Container to stop \"b84ec1b0347b9ad41da242f98c922c7c744ebf583240b18242fc9ba440a5107b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 3 13:53:41.954844 containerd[1553]: time="2026-03-03T13:53:41.954831456Z" level=info msg="Container to stop \"3ff51058d6517cadd70293e8066a97c18dd6e2f06d16305fe4b2bd4fca070bda\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 3 13:53:41.954844 containerd[1553]: time="2026-03-03T13:53:41.954845592Z" level=info msg="Container to stop \"5e5028f42c900b29e36ba817085bd627902af2f7f9d5008d9a27ab3675c820d0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 3 13:53:41.955349 containerd[1553]: time="2026-03-03T13:53:41.954858106Z" level=info msg="Container to stop \"d593133299325dfa06eca4e2c40028a9d3467d524ef4ec186e62527964dfe6ed\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 3 13:53:41.956728 containerd[1553]: time="2026-03-03T13:53:41.956488405Z" level=info msg="received sandbox exit event container_id:\"77c66afe0f1d3de6ea79f3060a43139bbdbae64ec2896ff29b4e6aeb3664695b\" id:\"77c66afe0f1d3de6ea79f3060a43139bbdbae64ec2896ff29b4e6aeb3664695b\" exit_status:137 exited_at:{seconds:1772546021 nanos:955957985}" monitor_name=podsandbox
Mar 3 13:53:41.996399 systemd[1]: cri-containerd-799727023f44c1b7e6041e6e65e77d24bb651f2deff70094b0aa606ab5c70fd6.scope: Deactivated successfully.
Mar 3 13:53:42.002200 containerd[1553]: time="2026-03-03T13:53:42.001892554Z" level=info msg="received sandbox exit event container_id:\"799727023f44c1b7e6041e6e65e77d24bb651f2deff70094b0aa606ab5c70fd6\" id:\"799727023f44c1b7e6041e6e65e77d24bb651f2deff70094b0aa606ab5c70fd6\" exit_status:137 exited_at:{seconds:1772546022 nanos:1408451}" monitor_name=podsandbox
Mar 3 13:53:42.034915 kubelet[2706]: E0303 13:53:42.033906 2706 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 3 13:53:42.093657 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77c66afe0f1d3de6ea79f3060a43139bbdbae64ec2896ff29b4e6aeb3664695b-rootfs.mount: Deactivated successfully.
Mar 3 13:53:42.154001 containerd[1553]: time="2026-03-03T13:53:42.152716472Z" level=info msg="shim disconnected" id=77c66afe0f1d3de6ea79f3060a43139bbdbae64ec2896ff29b4e6aeb3664695b namespace=k8s.io
Mar 3 13:53:42.154001 containerd[1553]: time="2026-03-03T13:53:42.153509221Z" level=warning msg="cleaning up after shim disconnected" id=77c66afe0f1d3de6ea79f3060a43139bbdbae64ec2896ff29b4e6aeb3664695b namespace=k8s.io
Mar 3 13:53:42.154001 containerd[1553]: time="2026-03-03T13:53:42.153607935Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 3 13:53:42.211453 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-799727023f44c1b7e6041e6e65e77d24bb651f2deff70094b0aa606ab5c70fd6-rootfs.mount: Deactivated successfully.
Mar 3 13:53:42.252553 containerd[1553]: time="2026-03-03T13:53:42.243945720Z" level=info msg="shim disconnected" id=799727023f44c1b7e6041e6e65e77d24bb651f2deff70094b0aa606ab5c70fd6 namespace=k8s.io
Mar 3 13:53:42.252553 containerd[1553]: time="2026-03-03T13:53:42.244383527Z" level=warning msg="cleaning up after shim disconnected" id=799727023f44c1b7e6041e6e65e77d24bb651f2deff70094b0aa606ab5c70fd6 namespace=k8s.io
Mar 3 13:53:42.252553 containerd[1553]: time="2026-03-03T13:53:42.244400919Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 3 13:53:42.344545 containerd[1553]: time="2026-03-03T13:53:42.343499571Z" level=info msg="TearDown network for sandbox \"77c66afe0f1d3de6ea79f3060a43139bbdbae64ec2896ff29b4e6aeb3664695b\" successfully"
Mar 3 13:53:42.344545 containerd[1553]: time="2026-03-03T13:53:42.343543544Z" level=info msg="StopPodSandbox for \"77c66afe0f1d3de6ea79f3060a43139bbdbae64ec2896ff29b4e6aeb3664695b\" returns successfully"
Mar 3 13:53:42.348238 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-77c66afe0f1d3de6ea79f3060a43139bbdbae64ec2896ff29b4e6aeb3664695b-shm.mount: Deactivated successfully.
Mar 3 13:53:42.348441 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-799727023f44c1b7e6041e6e65e77d24bb651f2deff70094b0aa606ab5c70fd6-shm.mount: Deactivated successfully.
Mar 3 13:53:42.350602 containerd[1553]: time="2026-03-03T13:53:42.350451481Z" level=info msg="TearDown network for sandbox \"799727023f44c1b7e6041e6e65e77d24bb651f2deff70094b0aa606ab5c70fd6\" successfully"
Mar 3 13:53:42.350602 containerd[1553]: time="2026-03-03T13:53:42.350547370Z" level=info msg="StopPodSandbox for \"799727023f44c1b7e6041e6e65e77d24bb651f2deff70094b0aa606ab5c70fd6\" returns successfully"
Mar 3 13:53:42.373835 containerd[1553]: time="2026-03-03T13:53:42.370883399Z" level=info msg="received sandbox container exit event sandbox_id:\"77c66afe0f1d3de6ea79f3060a43139bbdbae64ec2896ff29b4e6aeb3664695b\" exit_status:137 exited_at:{seconds:1772546021 nanos:955957985}" monitor_name=criService
Mar 3 13:53:42.373835 containerd[1553]: time="2026-03-03T13:53:42.371679324Z" level=info msg="received sandbox container exit event sandbox_id:\"799727023f44c1b7e6041e6e65e77d24bb651f2deff70094b0aa606ab5c70fd6\" exit_status:137 exited_at:{seconds:1772546022 nanos:1408451}" monitor_name=criService
Mar 3 13:53:42.561253 kubelet[2706]: I0303 13:53:42.559907 2706 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1d3e108f-d470-4a51-a148-0de592291451-hubble-tls\") pod \"1d3e108f-d470-4a51-a148-0de592291451\" (UID: \"1d3e108f-d470-4a51-a148-0de592291451\") "
Mar 3 13:53:42.561253 kubelet[2706]: I0303 13:53:42.559946 2706 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1d3e108f-d470-4a51-a148-0de592291451-xtables-lock\") pod \"1d3e108f-d470-4a51-a148-0de592291451\" (UID: \"1d3e108f-d470-4a51-a148-0de592291451\") "
Mar 3 13:53:42.561253 kubelet[2706]: I0303 13:53:42.559974 2706 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jq8tb\" (UniqueName: \"kubernetes.io/projected/1d3e108f-d470-4a51-a148-0de592291451-kube-api-access-jq8tb\") pod \"1d3e108f-d470-4a51-a148-0de592291451\" (UID: \"1d3e108f-d470-4a51-a148-0de592291451\") "
Mar 3 13:53:42.561253 kubelet[2706]: I0303 13:53:42.559994 2706 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1d3e108f-d470-4a51-a148-0de592291451-lib-modules\") pod \"1d3e108f-d470-4a51-a148-0de592291451\" (UID: \"1d3e108f-d470-4a51-a148-0de592291451\") "
Mar 3 13:53:42.563960 kubelet[2706]: I0303 13:53:42.560012 2706 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1d3e108f-d470-4a51-a148-0de592291451-hostproc\") pod \"1d3e108f-d470-4a51-a148-0de592291451\" (UID: \"1d3e108f-d470-4a51-a148-0de592291451\") "
Mar 3 13:53:42.564956 kubelet[2706]: I0303 13:53:42.564927 2706 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1d3e108f-d470-4a51-a148-0de592291451-bpf-maps\") pod \"1d3e108f-d470-4a51-a148-0de592291451\" (UID: \"1d3e108f-d470-4a51-a148-0de592291451\") "
Mar 3 13:53:42.571542 kubelet[2706]: I0303 13:53:42.565471 2706 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1d3e108f-d470-4a51-a148-0de592291451-clustermesh-secrets\") pod \"1d3e108f-d470-4a51-a148-0de592291451\" (UID: \"1d3e108f-d470-4a51-a148-0de592291451\") "
Mar 3 13:53:42.571542 kubelet[2706]: I0303 13:53:42.565496 2706 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1d3e108f-d470-4a51-a148-0de592291451-cilium-cgroup\") pod \"1d3e108f-d470-4a51-a148-0de592291451\" (UID: \"1d3e108f-d470-4a51-a148-0de592291451\") "
Mar 3 13:53:42.571542 kubelet[2706]: I0303 13:53:42.565638 2706 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1d3e108f-d470-4a51-a148-0de592291451-etc-cni-netd\") pod \"1d3e108f-d470-4a51-a148-0de592291451\" (UID: \"1d3e108f-d470-4a51-a148-0de592291451\") "
Mar 3 13:53:42.571542 kubelet[2706]: I0303 13:53:42.565659 2706 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1d3e108f-d470-4a51-a148-0de592291451-cilium-run\") pod \"1d3e108f-d470-4a51-a148-0de592291451\" (UID: \"1d3e108f-d470-4a51-a148-0de592291451\") "
Mar 3 13:53:42.571542 kubelet[2706]: I0303 13:53:42.565675 2706 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1d3e108f-d470-4a51-a148-0de592291451-cni-path\") pod \"1d3e108f-d470-4a51-a148-0de592291451\" (UID: \"1d3e108f-d470-4a51-a148-0de592291451\") "
Mar 3 13:53:42.571542 kubelet[2706]: I0303 13:53:42.565698 2706 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1d3e108f-d470-4a51-a148-0de592291451-host-proc-sys-net\") pod \"1d3e108f-d470-4a51-a148-0de592291451\" (UID: \"1d3e108f-d470-4a51-a148-0de592291451\") "
Mar 3 13:53:42.571779 kubelet[2706]: I0303 13:53:42.565726 2706 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1d3e108f-d470-4a51-a148-0de592291451-cilium-config-path\") pod \"1d3e108f-d470-4a51-a148-0de592291451\" (UID: \"1d3e108f-d470-4a51-a148-0de592291451\") "
Mar 3 13:53:42.571779 kubelet[2706]: I0303 13:53:42.565747 2706 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzn8l\" (UniqueName: \"kubernetes.io/projected/9a8f56fc-7b44-4a86-8e11-61df2076802e-kube-api-access-pzn8l\") pod \"9a8f56fc-7b44-4a86-8e11-61df2076802e\" (UID: \"9a8f56fc-7b44-4a86-8e11-61df2076802e\") "
Mar 3 13:53:42.571779 kubelet[2706]: I0303 13:53:42.565769 2706 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1d3e108f-d470-4a51-a148-0de592291451-host-proc-sys-kernel\") pod \"1d3e108f-d470-4a51-a148-0de592291451\" (UID: \"1d3e108f-d470-4a51-a148-0de592291451\") "
Mar 3 13:53:42.571779 kubelet[2706]: I0303 13:53:42.565789 2706 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9a8f56fc-7b44-4a86-8e11-61df2076802e-cilium-config-path\") pod \"9a8f56fc-7b44-4a86-8e11-61df2076802e\" (UID: \"9a8f56fc-7b44-4a86-8e11-61df2076802e\") "
Mar 3 13:53:42.571779 kubelet[2706]: I0303 13:53:42.563725 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d3e108f-d470-4a51-a148-0de592291451-hostproc" (OuterVolumeSpecName: "hostproc") pod "1d3e108f-d470-4a51-a148-0de592291451" (UID: "1d3e108f-d470-4a51-a148-0de592291451"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 3 13:53:42.571945 kubelet[2706]: I0303 13:53:42.563905 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d3e108f-d470-4a51-a148-0de592291451-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1d3e108f-d470-4a51-a148-0de592291451" (UID: "1d3e108f-d470-4a51-a148-0de592291451"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 3 13:53:42.571945 kubelet[2706]: I0303 13:53:42.565287 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d3e108f-d470-4a51-a148-0de592291451-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1d3e108f-d470-4a51-a148-0de592291451" (UID: "1d3e108f-d470-4a51-a148-0de592291451"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 3 13:53:42.571945 kubelet[2706]: I0303 13:53:42.565317 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d3e108f-d470-4a51-a148-0de592291451-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1d3e108f-d470-4a51-a148-0de592291451" (UID: "1d3e108f-d470-4a51-a148-0de592291451"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 3 13:53:42.571945 kubelet[2706]: I0303 13:53:42.566347 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d3e108f-d470-4a51-a148-0de592291451-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1d3e108f-d470-4a51-a148-0de592291451" (UID: "1d3e108f-d470-4a51-a148-0de592291451"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 3 13:53:42.573749 kubelet[2706]: I0303 13:53:42.573721 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d3e108f-d470-4a51-a148-0de592291451-cni-path" (OuterVolumeSpecName: "cni-path") pod "1d3e108f-d470-4a51-a148-0de592291451" (UID: "1d3e108f-d470-4a51-a148-0de592291451"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 3 13:53:42.573862 kubelet[2706]: I0303 13:53:42.573844 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d3e108f-d470-4a51-a148-0de592291451-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1d3e108f-d470-4a51-a148-0de592291451" (UID: "1d3e108f-d470-4a51-a148-0de592291451"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 3 13:53:42.573952 kubelet[2706]: I0303 13:53:42.573935 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d3e108f-d470-4a51-a148-0de592291451-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1d3e108f-d470-4a51-a148-0de592291451" (UID: "1d3e108f-d470-4a51-a148-0de592291451"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 3 13:53:42.574934 kubelet[2706]: I0303 13:53:42.574910 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d3e108f-d470-4a51-a148-0de592291451-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1d3e108f-d470-4a51-a148-0de592291451" (UID: "1d3e108f-d470-4a51-a148-0de592291451"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 3 13:53:42.583447 kubelet[2706]: I0303 13:53:42.583420 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d3e108f-d470-4a51-a148-0de592291451-kube-api-access-jq8tb" (OuterVolumeSpecName: "kube-api-access-jq8tb") pod "1d3e108f-d470-4a51-a148-0de592291451" (UID: "1d3e108f-d470-4a51-a148-0de592291451"). InnerVolumeSpecName "kube-api-access-jq8tb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 3 13:53:42.584359 kubelet[2706]: I0303 13:53:42.584321 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d3e108f-d470-4a51-a148-0de592291451-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1d3e108f-d470-4a51-a148-0de592291451" (UID: "1d3e108f-d470-4a51-a148-0de592291451"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 3 13:53:42.589443 kubelet[2706]: I0303 13:53:42.585882 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a8f56fc-7b44-4a86-8e11-61df2076802e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9a8f56fc-7b44-4a86-8e11-61df2076802e" (UID: "9a8f56fc-7b44-4a86-8e11-61df2076802e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 3 13:53:42.593941 kubelet[2706]: I0303 13:53:42.590014 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d3e108f-d470-4a51-a148-0de592291451-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1d3e108f-d470-4a51-a148-0de592291451" (UID: "1d3e108f-d470-4a51-a148-0de592291451"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 3 13:53:42.599372 kubelet[2706]: I0303 13:53:42.597935 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d3e108f-d470-4a51-a148-0de592291451-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1d3e108f-d470-4a51-a148-0de592291451" (UID: "1d3e108f-d470-4a51-a148-0de592291451"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 3 13:53:42.601949 kubelet[2706]: I0303 13:53:42.601898 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d3e108f-d470-4a51-a148-0de592291451-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1d3e108f-d470-4a51-a148-0de592291451" (UID: "1d3e108f-d470-4a51-a148-0de592291451"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 3 13:53:42.607322 kubelet[2706]: I0303 13:53:42.603985 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a8f56fc-7b44-4a86-8e11-61df2076802e-kube-api-access-pzn8l" (OuterVolumeSpecName: "kube-api-access-pzn8l") pod "9a8f56fc-7b44-4a86-8e11-61df2076802e" (UID: "9a8f56fc-7b44-4a86-8e11-61df2076802e"). InnerVolumeSpecName "kube-api-access-pzn8l". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 3 13:53:42.625594 kubelet[2706]: I0303 13:53:42.624727 2706 scope.go:117] "RemoveContainer" containerID="d593133299325dfa06eca4e2c40028a9d3467d524ef4ec186e62527964dfe6ed"
Mar 3 13:53:42.648490 containerd[1553]: time="2026-03-03T13:53:42.647453306Z" level=info msg="RemoveContainer for \"d593133299325dfa06eca4e2c40028a9d3467d524ef4ec186e62527964dfe6ed\""
Mar 3 13:53:42.664947 systemd[1]: Removed slice kubepods-burstable-pod1d3e108f_d470_4a51_a148_0de592291451.slice - libcontainer container kubepods-burstable-pod1d3e108f_d470_4a51_a148_0de592291451.slice.
Mar 3 13:53:42.666016 kubelet[2706]: I0303 13:53:42.665971 2706 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1d3e108f-d470-4a51-a148-0de592291451-hostproc\") on node \"localhost\" DevicePath \"\""
Mar 3 13:53:42.666016 kubelet[2706]: I0303 13:53:42.665991 2706 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1d3e108f-d470-4a51-a148-0de592291451-bpf-maps\") on node \"localhost\" DevicePath \"\""
Mar 3 13:53:42.666016 kubelet[2706]: I0303 13:53:42.666000 2706 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1d3e108f-d470-4a51-a148-0de592291451-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Mar 3 13:53:42.666016 kubelet[2706]: I0303 13:53:42.666009 2706 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1d3e108f-d470-4a51-a148-0de592291451-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Mar 3 13:53:42.666016 kubelet[2706]: I0303 13:53:42.666151 2706 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1d3e108f-d470-4a51-a148-0de592291451-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Mar 3 13:53:42.666016 kubelet[2706]: I0303 13:53:42.666164 2706 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1d3e108f-d470-4a51-a148-0de592291451-cilium-run\") on node \"localhost\" DevicePath \"\""
Mar 3 13:53:42.666016 kubelet[2706]: I0303 13:53:42.666171 2706 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1d3e108f-d470-4a51-a148-0de592291451-cni-path\") on node \"localhost\" DevicePath \"\""
Mar 3 13:53:42.666016 kubelet[2706]: I0303 13:53:42.666179 2706 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1d3e108f-d470-4a51-a148-0de592291451-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Mar 3 13:53:42.666535 kubelet[2706]: I0303 13:53:42.666189 2706 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1d3e108f-d470-4a51-a148-0de592291451-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 3 13:53:42.666535 kubelet[2706]: I0303 13:53:42.666196 2706 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pzn8l\" (UniqueName: \"kubernetes.io/projected/9a8f56fc-7b44-4a86-8e11-61df2076802e-kube-api-access-pzn8l\") on node \"localhost\" DevicePath \"\""
Mar 3 13:53:42.666535 kubelet[2706]: I0303 13:53:42.666203 2706 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1d3e108f-d470-4a51-a148-0de592291451-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Mar 3 13:53:42.666535 kubelet[2706]: I0303 13:53:42.666210 2706 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9a8f56fc-7b44-4a86-8e11-61df2076802e-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 3 13:53:42.666535 kubelet[2706]: I0303 13:53:42.666218 2706 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1d3e108f-d470-4a51-a148-0de592291451-hubble-tls\") on node \"localhost\" DevicePath \"\""
Mar 3 13:53:42.666535 kubelet[2706]: I0303 13:53:42.666225 2706 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1d3e108f-d470-4a51-a148-0de592291451-xtables-lock\") on node \"localhost\" DevicePath \"\""
Mar 3 13:53:42.666535 kubelet[2706]: I0303 13:53:42.666233 2706 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jq8tb\" (UniqueName: \"kubernetes.io/projected/1d3e108f-d470-4a51-a148-0de592291451-kube-api-access-jq8tb\") on node \"localhost\" DevicePath \"\""
Mar 3 13:53:42.666535 kubelet[2706]: I0303 13:53:42.666241 2706 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1d3e108f-d470-4a51-a148-0de592291451-lib-modules\") on node \"localhost\" DevicePath \"\""
Mar 3 13:53:42.671499 systemd[1]: kubepods-burstable-pod1d3e108f_d470_4a51_a148_0de592291451.slice: Consumed 14.223s CPU time, 124.4M memory peak, 224K read from disk, 13.3M written to disk.
Mar 3 13:53:42.673223 containerd[1553]: time="2026-03-03T13:53:42.672418590Z" level=info msg="RemoveContainer for \"d593133299325dfa06eca4e2c40028a9d3467d524ef4ec186e62527964dfe6ed\" returns successfully"
Mar 3 13:53:42.673299 kubelet[2706]: I0303 13:53:42.672922 2706 scope.go:117] "RemoveContainer" containerID="b84ec1b0347b9ad41da242f98c922c7c744ebf583240b18242fc9ba440a5107b"
Mar 3 13:53:42.747605 systemd[1]: Removed slice kubepods-besteffort-pod9a8f56fc_7b44_4a86_8e11_61df2076802e.slice - libcontainer container kubepods-besteffort-pod9a8f56fc_7b44_4a86_8e11_61df2076802e.slice.
Mar 3 13:53:42.747736 systemd[1]: kubepods-besteffort-pod9a8f56fc_7b44_4a86_8e11_61df2076802e.slice: Consumed 1.562s CPU time, 28.7M memory peak, 4K written to disk.
Mar 3 13:53:42.756292 containerd[1553]: time="2026-03-03T13:53:42.756250233Z" level=info msg="RemoveContainer for \"b84ec1b0347b9ad41da242f98c922c7c744ebf583240b18242fc9ba440a5107b\""
Mar 3 13:53:42.786767 systemd[1]: var-lib-kubelet-pods-9a8f56fc\x2d7b44\x2d4a86\x2d8e11\x2d61df2076802e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpzn8l.mount: Deactivated successfully.
Mar 3 13:53:42.787338 systemd[1]: var-lib-kubelet-pods-1d3e108f\x2dd470\x2d4a51\x2da148\x2d0de592291451-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djq8tb.mount: Deactivated successfully.
Mar 3 13:53:42.787652 systemd[1]: var-lib-kubelet-pods-1d3e108f\x2dd470\x2d4a51\x2da148\x2d0de592291451-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 3 13:53:42.787768 systemd[1]: var-lib-kubelet-pods-1d3e108f\x2dd470\x2d4a51\x2da148\x2d0de592291451-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 3 13:53:42.797262 containerd[1553]: time="2026-03-03T13:53:42.797186832Z" level=info msg="RemoveContainer for \"b84ec1b0347b9ad41da242f98c922c7c744ebf583240b18242fc9ba440a5107b\" returns successfully" Mar 3 13:53:42.798266 kubelet[2706]: I0303 13:53:42.797457 2706 scope.go:117] "RemoveContainer" containerID="5e5028f42c900b29e36ba817085bd627902af2f7f9d5008d9a27ab3675c820d0" Mar 3 13:53:42.820732 containerd[1553]: time="2026-03-03T13:53:42.817993131Z" level=info msg="RemoveContainer for \"5e5028f42c900b29e36ba817085bd627902af2f7f9d5008d9a27ab3675c820d0\"" Mar 3 13:53:42.834827 containerd[1553]: time="2026-03-03T13:53:42.834749928Z" level=info msg="RemoveContainer for \"5e5028f42c900b29e36ba817085bd627902af2f7f9d5008d9a27ab3675c820d0\" returns successfully" Mar 3 13:53:42.837931 kubelet[2706]: I0303 13:53:42.834969 2706 scope.go:117] "RemoveContainer" containerID="3ff51058d6517cadd70293e8066a97c18dd6e2f06d16305fe4b2bd4fca070bda" Mar 3 13:53:42.853954 containerd[1553]: time="2026-03-03T13:53:42.853639066Z" level=info msg="RemoveContainer for \"3ff51058d6517cadd70293e8066a97c18dd6e2f06d16305fe4b2bd4fca070bda\"" Mar 3 13:53:42.865248 containerd[1553]: time="2026-03-03T13:53:42.863999125Z" level=info msg="RemoveContainer for \"3ff51058d6517cadd70293e8066a97c18dd6e2f06d16305fe4b2bd4fca070bda\" returns successfully" Mar 3 13:53:42.865330 kubelet[2706]: I0303 13:53:42.864430 2706 scope.go:117] "RemoveContainer" containerID="97415b473779df1bc8607a6a5286b44637084e45fa2ab71aba9c3fa1daa82a21" Mar 3 13:53:42.867818 containerd[1553]: time="2026-03-03T13:53:42.867178786Z" level=info msg="RemoveContainer for 
\"97415b473779df1bc8607a6a5286b44637084e45fa2ab71aba9c3fa1daa82a21\"" Mar 3 13:53:42.880734 containerd[1553]: time="2026-03-03T13:53:42.880468138Z" level=info msg="RemoveContainer for \"97415b473779df1bc8607a6a5286b44637084e45fa2ab71aba9c3fa1daa82a21\" returns successfully" Mar 3 13:53:42.880822 kubelet[2706]: I0303 13:53:42.880741 2706 scope.go:117] "RemoveContainer" containerID="d593133299325dfa06eca4e2c40028a9d3467d524ef4ec186e62527964dfe6ed" Mar 3 13:53:42.901441 kubelet[2706]: I0303 13:53:42.901394 2706 setters.go:543] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-03T13:53:42Z","lastTransitionTime":"2026-03-03T13:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 3 13:53:42.923451 containerd[1553]: time="2026-03-03T13:53:42.882938613Z" level=error msg="ContainerStatus for \"d593133299325dfa06eca4e2c40028a9d3467d524ef4ec186e62527964dfe6ed\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d593133299325dfa06eca4e2c40028a9d3467d524ef4ec186e62527964dfe6ed\": not found" Mar 3 13:53:42.923597 kubelet[2706]: E0303 13:53:42.922897 2706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d593133299325dfa06eca4e2c40028a9d3467d524ef4ec186e62527964dfe6ed\": not found" containerID="d593133299325dfa06eca4e2c40028a9d3467d524ef4ec186e62527964dfe6ed" Mar 3 13:53:42.923597 kubelet[2706]: I0303 13:53:42.922945 2706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d593133299325dfa06eca4e2c40028a9d3467d524ef4ec186e62527964dfe6ed"} err="failed to get container status \"d593133299325dfa06eca4e2c40028a9d3467d524ef4ec186e62527964dfe6ed\": rpc error: code = NotFound desc = an error 
occurred when try to find container \"d593133299325dfa06eca4e2c40028a9d3467d524ef4ec186e62527964dfe6ed\": not found" Mar 3 13:53:42.923597 kubelet[2706]: I0303 13:53:42.922991 2706 scope.go:117] "RemoveContainer" containerID="b84ec1b0347b9ad41da242f98c922c7c744ebf583240b18242fc9ba440a5107b" Mar 3 13:53:42.923728 containerd[1553]: time="2026-03-03T13:53:42.923665124Z" level=error msg="ContainerStatus for \"b84ec1b0347b9ad41da242f98c922c7c744ebf583240b18242fc9ba440a5107b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b84ec1b0347b9ad41da242f98c922c7c744ebf583240b18242fc9ba440a5107b\": not found" Mar 3 13:53:42.923957 kubelet[2706]: E0303 13:53:42.923787 2706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b84ec1b0347b9ad41da242f98c922c7c744ebf583240b18242fc9ba440a5107b\": not found" containerID="b84ec1b0347b9ad41da242f98c922c7c744ebf583240b18242fc9ba440a5107b" Mar 3 13:53:42.923957 kubelet[2706]: I0303 13:53:42.923814 2706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b84ec1b0347b9ad41da242f98c922c7c744ebf583240b18242fc9ba440a5107b"} err="failed to get container status \"b84ec1b0347b9ad41da242f98c922c7c744ebf583240b18242fc9ba440a5107b\": rpc error: code = NotFound desc = an error occurred when try to find container \"b84ec1b0347b9ad41da242f98c922c7c744ebf583240b18242fc9ba440a5107b\": not found" Mar 3 13:53:42.923957 kubelet[2706]: I0303 13:53:42.923831 2706 scope.go:117] "RemoveContainer" containerID="5e5028f42c900b29e36ba817085bd627902af2f7f9d5008d9a27ab3675c820d0" Mar 3 13:53:42.926455 sshd[4856]: Connection closed by 10.0.0.1 port 56408 Mar 3 13:53:42.929465 containerd[1553]: time="2026-03-03T13:53:42.926326450Z" level=error msg="ContainerStatus for \"5e5028f42c900b29e36ba817085bd627902af2f7f9d5008d9a27ab3675c820d0\" failed" error="rpc error: code = NotFound desc = an 
error occurred when try to find container \"5e5028f42c900b29e36ba817085bd627902af2f7f9d5008d9a27ab3675c820d0\": not found" Mar 3 13:53:42.929465 containerd[1553]: time="2026-03-03T13:53:42.927349989Z" level=error msg="ContainerStatus for \"3ff51058d6517cadd70293e8066a97c18dd6e2f06d16305fe4b2bd4fca070bda\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3ff51058d6517cadd70293e8066a97c18dd6e2f06d16305fe4b2bd4fca070bda\": not found" Mar 3 13:53:42.926249 sshd-session[4853]: pam_unix(sshd:session): session closed for user core Mar 3 13:53:42.929875 kubelet[2706]: E0303 13:53:42.926684 2706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5e5028f42c900b29e36ba817085bd627902af2f7f9d5008d9a27ab3675c820d0\": not found" containerID="5e5028f42c900b29e36ba817085bd627902af2f7f9d5008d9a27ab3675c820d0" Mar 3 13:53:42.929875 kubelet[2706]: I0303 13:53:42.926710 2706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5e5028f42c900b29e36ba817085bd627902af2f7f9d5008d9a27ab3675c820d0"} err="failed to get container status \"5e5028f42c900b29e36ba817085bd627902af2f7f9d5008d9a27ab3675c820d0\": rpc error: code = NotFound desc = an error occurred when try to find container \"5e5028f42c900b29e36ba817085bd627902af2f7f9d5008d9a27ab3675c820d0\": not found" Mar 3 13:53:42.929875 kubelet[2706]: I0303 13:53:42.926727 2706 scope.go:117] "RemoveContainer" containerID="3ff51058d6517cadd70293e8066a97c18dd6e2f06d16305fe4b2bd4fca070bda" Mar 3 13:53:42.929875 kubelet[2706]: E0303 13:53:42.927477 2706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3ff51058d6517cadd70293e8066a97c18dd6e2f06d16305fe4b2bd4fca070bda\": not found" containerID="3ff51058d6517cadd70293e8066a97c18dd6e2f06d16305fe4b2bd4fca070bda" Mar 3 
13:53:42.929875 kubelet[2706]: I0303 13:53:42.927503 2706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3ff51058d6517cadd70293e8066a97c18dd6e2f06d16305fe4b2bd4fca070bda"} err="failed to get container status \"3ff51058d6517cadd70293e8066a97c18dd6e2f06d16305fe4b2bd4fca070bda\": rpc error: code = NotFound desc = an error occurred when try to find container \"3ff51058d6517cadd70293e8066a97c18dd6e2f06d16305fe4b2bd4fca070bda\": not found" Mar 3 13:53:42.929875 kubelet[2706]: I0303 13:53:42.927523 2706 scope.go:117] "RemoveContainer" containerID="97415b473779df1bc8607a6a5286b44637084e45fa2ab71aba9c3fa1daa82a21" Mar 3 13:53:42.935395 kubelet[2706]: E0303 13:53:42.930424 2706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"97415b473779df1bc8607a6a5286b44637084e45fa2ab71aba9c3fa1daa82a21\": not found" containerID="97415b473779df1bc8607a6a5286b44637084e45fa2ab71aba9c3fa1daa82a21" Mar 3 13:53:42.935395 kubelet[2706]: I0303 13:53:42.930449 2706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"97415b473779df1bc8607a6a5286b44637084e45fa2ab71aba9c3fa1daa82a21"} err="failed to get container status \"97415b473779df1bc8607a6a5286b44637084e45fa2ab71aba9c3fa1daa82a21\": rpc error: code = NotFound desc = an error occurred when try to find container \"97415b473779df1bc8607a6a5286b44637084e45fa2ab71aba9c3fa1daa82a21\": not found" Mar 3 13:53:42.935395 kubelet[2706]: I0303 13:53:42.930470 2706 scope.go:117] "RemoveContainer" containerID="914fcb9bd19046b967dfdf7756220596d43bcaaf1136a112f8ccaed2bf61af68" Mar 3 13:53:42.935487 containerd[1553]: time="2026-03-03T13:53:42.929886432Z" level=error msg="ContainerStatus for \"97415b473779df1bc8607a6a5286b44637084e45fa2ab71aba9c3fa1daa82a21\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"97415b473779df1bc8607a6a5286b44637084e45fa2ab71aba9c3fa1daa82a21\": not found" Mar 3 13:53:42.940626 containerd[1553]: time="2026-03-03T13:53:42.939972988Z" level=info msg="RemoveContainer for \"914fcb9bd19046b967dfdf7756220596d43bcaaf1136a112f8ccaed2bf61af68\"" Mar 3 13:53:42.956760 containerd[1553]: time="2026-03-03T13:53:42.954741839Z" level=info msg="RemoveContainer for \"914fcb9bd19046b967dfdf7756220596d43bcaaf1136a112f8ccaed2bf61af68\" returns successfully" Mar 3 13:53:42.958419 kubelet[2706]: I0303 13:53:42.958394 2706 scope.go:117] "RemoveContainer" containerID="914fcb9bd19046b967dfdf7756220596d43bcaaf1136a112f8ccaed2bf61af68" Mar 3 13:53:42.959437 kubelet[2706]: E0303 13:53:42.958805 2706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"914fcb9bd19046b967dfdf7756220596d43bcaaf1136a112f8ccaed2bf61af68\": not found" containerID="914fcb9bd19046b967dfdf7756220596d43bcaaf1136a112f8ccaed2bf61af68" Mar 3 13:53:42.959437 kubelet[2706]: I0303 13:53:42.958836 2706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"914fcb9bd19046b967dfdf7756220596d43bcaaf1136a112f8ccaed2bf61af68"} err="failed to get container status \"914fcb9bd19046b967dfdf7756220596d43bcaaf1136a112f8ccaed2bf61af68\": rpc error: code = NotFound desc = an error occurred when try to find container \"914fcb9bd19046b967dfdf7756220596d43bcaaf1136a112f8ccaed2bf61af68\": not found" Mar 3 13:53:42.959554 containerd[1553]: time="2026-03-03T13:53:42.958660751Z" level=error msg="ContainerStatus for \"914fcb9bd19046b967dfdf7756220596d43bcaaf1136a112f8ccaed2bf61af68\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"914fcb9bd19046b967dfdf7756220596d43bcaaf1136a112f8ccaed2bf61af68\": not found" Mar 3 13:53:42.960390 systemd[1]: sshd@51-10.0.0.100:22-10.0.0.1:56408.service: Deactivated successfully. 
Mar 3 13:53:42.968015 systemd[1]: session-52.scope: Deactivated successfully. Mar 3 13:53:42.968642 systemd[1]: session-52.scope: Consumed 2.627s CPU time, 26.8M memory peak. Mar 3 13:53:42.979624 systemd-logind[1535]: Session 52 logged out. Waiting for processes to exit. Mar 3 13:53:42.989256 systemd[1]: Started sshd@52-10.0.0.100:22-10.0.0.1:43032.service - OpenSSH per-connection server daemon (10.0.0.1:43032). Mar 3 13:53:42.998267 systemd-logind[1535]: Removed session 52. Mar 3 13:53:43.177800 sshd[5006]: Accepted publickey for core from 10.0.0.1 port 43032 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:53:43.182681 sshd-session[5006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:53:43.203835 systemd-logind[1535]: New session 53 of user core. Mar 3 13:53:43.225927 systemd[1]: Started session-53.scope - Session 53 of User core. Mar 3 13:53:44.386657 containerd[1553]: time="2026-03-03T13:53:44.386331150Z" level=warning msg="container event discarded" container=799727023f44c1b7e6041e6e65e77d24bb651f2deff70094b0aa606ab5c70fd6 type=CONTAINER_CREATED_EVENT Mar 3 13:53:44.386657 containerd[1553]: time="2026-03-03T13:53:44.386415728Z" level=warning msg="container event discarded" container=799727023f44c1b7e6041e6e65e77d24bb651f2deff70094b0aa606ab5c70fd6 type=CONTAINER_STARTED_EVENT Mar 3 13:53:44.423702 containerd[1553]: time="2026-03-03T13:53:44.419794400Z" level=warning msg="container event discarded" container=0521ae9b8f7b95716fb415d4bf453d54856c10f0695f3b68eb7e31e9d001530b type=CONTAINER_CREATED_EVENT Mar 3 13:53:44.423702 containerd[1553]: time="2026-03-03T13:53:44.419851446Z" level=warning msg="container event discarded" container=0521ae9b8f7b95716fb415d4bf453d54856c10f0695f3b68eb7e31e9d001530b type=CONTAINER_STARTED_EVENT Mar 3 13:53:44.491451 containerd[1553]: time="2026-03-03T13:53:44.491385292Z" level=warning msg="container event discarded" 
container=12874d58d4d0e61e9e5093c157e885548c421e5f4c14c44f08990cc39f111fc9 type=CONTAINER_CREATED_EVENT Mar 3 13:53:44.551526 kubelet[2706]: I0303 13:53:44.551322 2706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d3e108f-d470-4a51-a148-0de592291451" path="/var/lib/kubelet/pods/1d3e108f-d470-4a51-a148-0de592291451/volumes" Mar 3 13:53:44.560300 kubelet[2706]: I0303 13:53:44.553017 2706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a8f56fc-7b44-4a86-8e11-61df2076802e" path="/var/lib/kubelet/pods/9a8f56fc-7b44-4a86-8e11-61df2076802e/volumes" Mar 3 13:53:44.561201 containerd[1553]: time="2026-03-03T13:53:44.560806461Z" level=warning msg="container event discarded" container=77c66afe0f1d3de6ea79f3060a43139bbdbae64ec2896ff29b4e6aeb3664695b type=CONTAINER_CREATED_EVENT Mar 3 13:53:44.561524 containerd[1553]: time="2026-03-03T13:53:44.561339215Z" level=warning msg="container event discarded" container=77c66afe0f1d3de6ea79f3060a43139bbdbae64ec2896ff29b4e6aeb3664695b type=CONTAINER_STARTED_EVENT Mar 3 13:53:44.720690 containerd[1553]: time="2026-03-03T13:53:44.715648227Z" level=warning msg="container event discarded" container=12874d58d4d0e61e9e5093c157e885548c421e5f4c14c44f08990cc39f111fc9 type=CONTAINER_STARTED_EVENT Mar 3 13:53:47.264168 kubelet[2706]: E0303 13:53:47.256369 2706 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 3 13:53:47.830005 kubelet[2706]: E0303 13:53:47.820638 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:53:49.550233 kubelet[2706]: E0303 13:53:49.549360 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Mar 3 13:53:49.572139 sshd[5009]: Connection closed by 10.0.0.1 port 43032 Mar 3 13:53:49.553256 sshd-session[5006]: pam_unix(sshd:session): session closed for user core Mar 3 13:53:49.610773 systemd[1]: sshd@52-10.0.0.100:22-10.0.0.1:43032.service: Deactivated successfully. Mar 3 13:53:49.648496 systemd[1]: session-53.scope: Deactivated successfully. Mar 3 13:53:49.648906 systemd[1]: session-53.scope: Consumed 1.367s CPU time, 26.4M memory peak. Mar 3 13:53:49.656176 systemd-logind[1535]: Session 53 logged out. Waiting for processes to exit. Mar 3 13:53:49.662319 systemd-logind[1535]: Removed session 53. Mar 3 13:53:49.667690 systemd[1]: Started sshd@53-10.0.0.100:22-10.0.0.1:43050.service - OpenSSH per-connection server daemon (10.0.0.1:43050). Mar 3 13:53:49.879569 systemd[1]: Created slice kubepods-burstable-pod132494dd_d8c8_4721_ba82_983eeccc61b0.slice - libcontainer container kubepods-burstable-pod132494dd_d8c8_4721_ba82_983eeccc61b0.slice. Mar 3 13:53:49.880699 sshd[5023]: Accepted publickey for core from 10.0.0.1 port 43050 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:53:49.883815 sshd-session[5023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:53:49.930259 kubelet[2706]: I0303 13:53:49.929834 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/132494dd-d8c8-4721-ba82-983eeccc61b0-lib-modules\") pod \"cilium-qf7bs\" (UID: \"132494dd-d8c8-4721-ba82-983eeccc61b0\") " pod="kube-system/cilium-qf7bs" Mar 3 13:53:49.930259 kubelet[2706]: I0303 13:53:49.930014 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/132494dd-d8c8-4721-ba82-983eeccc61b0-host-proc-sys-net\") pod \"cilium-qf7bs\" (UID: \"132494dd-d8c8-4721-ba82-983eeccc61b0\") " pod="kube-system/cilium-qf7bs" Mar 3 13:53:49.939945 
kubelet[2706]: I0303 13:53:49.932422 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/132494dd-d8c8-4721-ba82-983eeccc61b0-host-proc-sys-kernel\") pod \"cilium-qf7bs\" (UID: \"132494dd-d8c8-4721-ba82-983eeccc61b0\") " pod="kube-system/cilium-qf7bs" Mar 3 13:53:49.939945 kubelet[2706]: I0303 13:53:49.932528 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/132494dd-d8c8-4721-ba82-983eeccc61b0-cilium-ipsec-secrets\") pod \"cilium-qf7bs\" (UID: \"132494dd-d8c8-4721-ba82-983eeccc61b0\") " pod="kube-system/cilium-qf7bs" Mar 3 13:53:49.939945 kubelet[2706]: I0303 13:53:49.932555 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c25vt\" (UniqueName: \"kubernetes.io/projected/132494dd-d8c8-4721-ba82-983eeccc61b0-kube-api-access-c25vt\") pod \"cilium-qf7bs\" (UID: \"132494dd-d8c8-4721-ba82-983eeccc61b0\") " pod="kube-system/cilium-qf7bs" Mar 3 13:53:49.939945 kubelet[2706]: I0303 13:53:49.932588 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/132494dd-d8c8-4721-ba82-983eeccc61b0-etc-cni-netd\") pod \"cilium-qf7bs\" (UID: \"132494dd-d8c8-4721-ba82-983eeccc61b0\") " pod="kube-system/cilium-qf7bs" Mar 3 13:53:49.939945 kubelet[2706]: I0303 13:53:49.932615 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/132494dd-d8c8-4721-ba82-983eeccc61b0-xtables-lock\") pod \"cilium-qf7bs\" (UID: \"132494dd-d8c8-4721-ba82-983eeccc61b0\") " pod="kube-system/cilium-qf7bs" Mar 3 13:53:49.942281 kubelet[2706]: I0303 13:53:49.932641 2706 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/132494dd-d8c8-4721-ba82-983eeccc61b0-cilium-run\") pod \"cilium-qf7bs\" (UID: \"132494dd-d8c8-4721-ba82-983eeccc61b0\") " pod="kube-system/cilium-qf7bs" Mar 3 13:53:49.942281 kubelet[2706]: I0303 13:53:49.932665 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/132494dd-d8c8-4721-ba82-983eeccc61b0-bpf-maps\") pod \"cilium-qf7bs\" (UID: \"132494dd-d8c8-4721-ba82-983eeccc61b0\") " pod="kube-system/cilium-qf7bs" Mar 3 13:53:49.942281 kubelet[2706]: I0303 13:53:49.932688 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/132494dd-d8c8-4721-ba82-983eeccc61b0-hubble-tls\") pod \"cilium-qf7bs\" (UID: \"132494dd-d8c8-4721-ba82-983eeccc61b0\") " pod="kube-system/cilium-qf7bs" Mar 3 13:53:49.942281 kubelet[2706]: I0303 13:53:49.932715 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/132494dd-d8c8-4721-ba82-983eeccc61b0-cni-path\") pod \"cilium-qf7bs\" (UID: \"132494dd-d8c8-4721-ba82-983eeccc61b0\") " pod="kube-system/cilium-qf7bs" Mar 3 13:53:49.942281 kubelet[2706]: I0303 13:53:49.932813 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/132494dd-d8c8-4721-ba82-983eeccc61b0-clustermesh-secrets\") pod \"cilium-qf7bs\" (UID: \"132494dd-d8c8-4721-ba82-983eeccc61b0\") " pod="kube-system/cilium-qf7bs" Mar 3 13:53:49.942281 kubelet[2706]: I0303 13:53:49.932837 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/132494dd-d8c8-4721-ba82-983eeccc61b0-cilium-config-path\") pod \"cilium-qf7bs\" (UID: \"132494dd-d8c8-4721-ba82-983eeccc61b0\") " pod="kube-system/cilium-qf7bs" Mar 3 13:53:49.942537 kubelet[2706]: I0303 13:53:49.932865 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/132494dd-d8c8-4721-ba82-983eeccc61b0-hostproc\") pod \"cilium-qf7bs\" (UID: \"132494dd-d8c8-4721-ba82-983eeccc61b0\") " pod="kube-system/cilium-qf7bs" Mar 3 13:53:49.942537 kubelet[2706]: I0303 13:53:49.932890 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/132494dd-d8c8-4721-ba82-983eeccc61b0-cilium-cgroup\") pod \"cilium-qf7bs\" (UID: \"132494dd-d8c8-4721-ba82-983eeccc61b0\") " pod="kube-system/cilium-qf7bs" Mar 3 13:53:49.945869 systemd-logind[1535]: New session 54 of user core. Mar 3 13:53:49.959190 systemd[1]: Started session-54.scope - Session 54 of User core. Mar 3 13:53:50.033263 sshd[5026]: Connection closed by 10.0.0.1 port 43050 Mar 3 13:53:50.039189 sshd-session[5023]: pam_unix(sshd:session): session closed for user core Mar 3 13:53:50.141997 systemd[1]: sshd@53-10.0.0.100:22-10.0.0.1:43050.service: Deactivated successfully. Mar 3 13:53:50.182789 systemd[1]: session-54.scope: Deactivated successfully. Mar 3 13:53:50.188670 systemd-logind[1535]: Session 54 logged out. Waiting for processes to exit. Mar 3 13:53:50.223376 kubelet[2706]: E0303 13:53:50.201163 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:53:50.230498 systemd[1]: Started sshd@54-10.0.0.100:22-10.0.0.1:54000.service - OpenSSH per-connection server daemon (10.0.0.1:54000). 
Mar 3 13:53:50.236553 containerd[1553]: time="2026-03-03T13:53:50.233310835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qf7bs,Uid:132494dd-d8c8-4721-ba82-983eeccc61b0,Namespace:kube-system,Attempt:0,}" Mar 3 13:53:50.251010 systemd-logind[1535]: Removed session 54. Mar 3 13:53:50.401442 containerd[1553]: time="2026-03-03T13:53:50.396961810Z" level=info msg="connecting to shim bd6866b4e4655a73b2219f8456e13051ee532a12956167acc121dce099ba8138" address="unix:///run/containerd/s/0363ae13cdbae4766bcb4686bb61cf4c81f2c81c6ddaabb73589752119fc9894" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:53:50.644780 sshd[5037]: Accepted publickey for core from 10.0.0.1 port 54000 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:53:50.644546 sshd-session[5037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:53:50.676204 systemd-logind[1535]: New session 55 of user core. Mar 3 13:53:50.744850 systemd[1]: Started session-55.scope - Session 55 of User core. Mar 3 13:53:50.904340 systemd[1]: Started cri-containerd-bd6866b4e4655a73b2219f8456e13051ee532a12956167acc121dce099ba8138.scope - libcontainer container bd6866b4e4655a73b2219f8456e13051ee532a12956167acc121dce099ba8138. 
Mar 3 13:53:51.262538 containerd[1553]: time="2026-03-03T13:53:51.260689349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qf7bs,Uid:132494dd-d8c8-4721-ba82-983eeccc61b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd6866b4e4655a73b2219f8456e13051ee532a12956167acc121dce099ba8138\"" Mar 3 13:53:51.274162 kubelet[2706]: E0303 13:53:51.270535 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:53:51.287152 containerd[1553]: time="2026-03-03T13:53:51.286422085Z" level=info msg="CreateContainer within sandbox \"bd6866b4e4655a73b2219f8456e13051ee532a12956167acc121dce099ba8138\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 3 13:53:51.429168 containerd[1553]: time="2026-03-03T13:53:51.428886702Z" level=info msg="Container 802763cc9d8c603002c60aa85dd38ab1c0f20fd1d624439a830b88bdb1400a69: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:53:51.431691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3842226109.mount: Deactivated successfully. Mar 3 13:53:51.438037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1944670331.mount: Deactivated successfully. 
Mar 3 13:53:51.483657 containerd[1553]: time="2026-03-03T13:53:51.483463838Z" level=info msg="CreateContainer within sandbox \"bd6866b4e4655a73b2219f8456e13051ee532a12956167acc121dce099ba8138\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"802763cc9d8c603002c60aa85dd38ab1c0f20fd1d624439a830b88bdb1400a69\"" Mar 3 13:53:51.502450 containerd[1553]: time="2026-03-03T13:53:51.501863602Z" level=info msg="StartContainer for \"802763cc9d8c603002c60aa85dd38ab1c0f20fd1d624439a830b88bdb1400a69\"" Mar 3 13:53:51.561618 containerd[1553]: time="2026-03-03T13:53:51.560667539Z" level=info msg="connecting to shim 802763cc9d8c603002c60aa85dd38ab1c0f20fd1d624439a830b88bdb1400a69" address="unix:///run/containerd/s/0363ae13cdbae4766bcb4686bb61cf4c81f2c81c6ddaabb73589752119fc9894" protocol=ttrpc version=3 Mar 3 13:53:51.753353 systemd[1]: Started cri-containerd-802763cc9d8c603002c60aa85dd38ab1c0f20fd1d624439a830b88bdb1400a69.scope - libcontainer container 802763cc9d8c603002c60aa85dd38ab1c0f20fd1d624439a830b88bdb1400a69. Mar 3 13:53:51.902298 containerd[1553]: time="2026-03-03T13:53:51.900800737Z" level=info msg="StartContainer for \"802763cc9d8c603002c60aa85dd38ab1c0f20fd1d624439a830b88bdb1400a69\" returns successfully" Mar 3 13:53:51.942695 systemd[1]: cri-containerd-802763cc9d8c603002c60aa85dd38ab1c0f20fd1d624439a830b88bdb1400a69.scope: Deactivated successfully. Mar 3 13:53:51.947212 containerd[1553]: time="2026-03-03T13:53:51.947124801Z" level=info msg="received container exit event container_id:\"802763cc9d8c603002c60aa85dd38ab1c0f20fd1d624439a830b88bdb1400a69\" id:\"802763cc9d8c603002c60aa85dd38ab1c0f20fd1d624439a830b88bdb1400a69\" pid:5110 exited_at:{seconds:1772546031 nanos:945853467}" Mar 3 13:53:51.995243 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-802763cc9d8c603002c60aa85dd38ab1c0f20fd1d624439a830b88bdb1400a69-rootfs.mount: Deactivated successfully. 
Mar 3 13:53:52.285963 kubelet[2706]: E0303 13:53:52.280360 2706 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 3 13:53:52.974219 kubelet[2706]: E0303 13:53:52.973833 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:53:52.992199 containerd[1553]: time="2026-03-03T13:53:52.991985757Z" level=info msg="CreateContainer within sandbox \"bd6866b4e4655a73b2219f8456e13051ee532a12956167acc121dce099ba8138\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 3 13:53:53.043449 containerd[1553]: time="2026-03-03T13:53:53.043267447Z" level=info msg="Container 8c082781321d418d3b6e41c44cb24fe3666375f2fb7795cd09d5594ada7a9367: CDI devices from CRI Config.CDIDevices: []"
Mar 3 13:53:53.049040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1122597110.mount: Deactivated successfully.
Mar 3 13:53:53.152231 containerd[1553]: time="2026-03-03T13:53:53.150890387Z" level=info msg="CreateContainer within sandbox \"bd6866b4e4655a73b2219f8456e13051ee532a12956167acc121dce099ba8138\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8c082781321d418d3b6e41c44cb24fe3666375f2fb7795cd09d5594ada7a9367\""
Mar 3 13:53:53.153784 containerd[1553]: time="2026-03-03T13:53:53.152635332Z" level=info msg="StartContainer for \"8c082781321d418d3b6e41c44cb24fe3666375f2fb7795cd09d5594ada7a9367\""
Mar 3 13:53:53.157492 containerd[1553]: time="2026-03-03T13:53:53.157392076Z" level=info msg="connecting to shim 8c082781321d418d3b6e41c44cb24fe3666375f2fb7795cd09d5594ada7a9367" address="unix:///run/containerd/s/0363ae13cdbae4766bcb4686bb61cf4c81f2c81c6ddaabb73589752119fc9894" protocol=ttrpc version=3
Mar 3 13:53:53.333863 systemd[1]: Started cri-containerd-8c082781321d418d3b6e41c44cb24fe3666375f2fb7795cd09d5594ada7a9367.scope - libcontainer container 8c082781321d418d3b6e41c44cb24fe3666375f2fb7795cd09d5594ada7a9367.
Mar 3 13:53:53.469745 containerd[1553]: time="2026-03-03T13:53:53.467052126Z" level=info msg="StartContainer for \"8c082781321d418d3b6e41c44cb24fe3666375f2fb7795cd09d5594ada7a9367\" returns successfully"
Mar 3 13:53:53.479376 systemd[1]: cri-containerd-8c082781321d418d3b6e41c44cb24fe3666375f2fb7795cd09d5594ada7a9367.scope: Deactivated successfully.
Mar 3 13:53:53.481957 containerd[1553]: time="2026-03-03T13:53:53.481845846Z" level=info msg="received container exit event container_id:\"8c082781321d418d3b6e41c44cb24fe3666375f2fb7795cd09d5594ada7a9367\" id:\"8c082781321d418d3b6e41c44cb24fe3666375f2fb7795cd09d5594ada7a9367\" pid:5155 exited_at:{seconds:1772546033 nanos:481527302}"
Mar 3 13:53:53.536343 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c082781321d418d3b6e41c44cb24fe3666375f2fb7795cd09d5594ada7a9367-rootfs.mount: Deactivated successfully.
Mar 3 13:53:53.985670 kubelet[2706]: E0303 13:53:53.985455 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:53:53.992473 containerd[1553]: time="2026-03-03T13:53:53.992427770Z" level=info msg="CreateContainer within sandbox \"bd6866b4e4655a73b2219f8456e13051ee532a12956167acc121dce099ba8138\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 3 13:53:54.030156 containerd[1553]: time="2026-03-03T13:53:54.029018913Z" level=info msg="Container e0ab155e2aa0964aaa98dd59a5e157a15cdad00f1f9a5225ef7e52f512144309: CDI devices from CRI Config.CDIDevices: []"
Mar 3 13:53:54.040508 containerd[1553]: time="2026-03-03T13:53:54.040390068Z" level=info msg="CreateContainer within sandbox \"bd6866b4e4655a73b2219f8456e13051ee532a12956167acc121dce099ba8138\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e0ab155e2aa0964aaa98dd59a5e157a15cdad00f1f9a5225ef7e52f512144309\""
Mar 3 13:53:54.040783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1739617189.mount: Deactivated successfully.
Mar 3 13:53:54.041323 containerd[1553]: time="2026-03-03T13:53:54.040968513Z" level=info msg="StartContainer for \"e0ab155e2aa0964aaa98dd59a5e157a15cdad00f1f9a5225ef7e52f512144309\""
Mar 3 13:53:54.044058 containerd[1553]: time="2026-03-03T13:53:54.043903509Z" level=info msg="connecting to shim e0ab155e2aa0964aaa98dd59a5e157a15cdad00f1f9a5225ef7e52f512144309" address="unix:///run/containerd/s/0363ae13cdbae4766bcb4686bb61cf4c81f2c81c6ddaabb73589752119fc9894" protocol=ttrpc version=3
Mar 3 13:53:54.086389 systemd[1]: Started cri-containerd-e0ab155e2aa0964aaa98dd59a5e157a15cdad00f1f9a5225ef7e52f512144309.scope - libcontainer container e0ab155e2aa0964aaa98dd59a5e157a15cdad00f1f9a5225ef7e52f512144309.
Mar 3 13:53:54.209511 systemd[1]: cri-containerd-e0ab155e2aa0964aaa98dd59a5e157a15cdad00f1f9a5225ef7e52f512144309.scope: Deactivated successfully.
Mar 3 13:53:54.213876 containerd[1553]: time="2026-03-03T13:53:54.213763728Z" level=info msg="received container exit event container_id:\"e0ab155e2aa0964aaa98dd59a5e157a15cdad00f1f9a5225ef7e52f512144309\" id:\"e0ab155e2aa0964aaa98dd59a5e157a15cdad00f1f9a5225ef7e52f512144309\" pid:5199 exited_at:{seconds:1772546034 nanos:212651658}"
Mar 3 13:53:54.215167 containerd[1553]: time="2026-03-03T13:53:54.215044409Z" level=info msg="StartContainer for \"e0ab155e2aa0964aaa98dd59a5e157a15cdad00f1f9a5225ef7e52f512144309\" returns successfully"
Mar 3 13:53:54.259313 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0ab155e2aa0964aaa98dd59a5e157a15cdad00f1f9a5225ef7e52f512144309-rootfs.mount: Deactivated successfully.
Mar 3 13:53:55.093565 kubelet[2706]: E0303 13:53:55.088469 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:53:55.160589 containerd[1553]: time="2026-03-03T13:53:55.160158246Z" level=info msg="CreateContainer within sandbox \"bd6866b4e4655a73b2219f8456e13051ee532a12956167acc121dce099ba8138\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 3 13:53:55.345571 containerd[1553]: time="2026-03-03T13:53:55.342046434Z" level=info msg="Container dbcd10214d06724bdfb5ab78068468a180269243df116ddc0506a81889734550: CDI devices from CRI Config.CDIDevices: []"
Mar 3 13:53:55.389834 containerd[1553]: time="2026-03-03T13:53:55.388569649Z" level=info msg="CreateContainer within sandbox \"bd6866b4e4655a73b2219f8456e13051ee532a12956167acc121dce099ba8138\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"dbcd10214d06724bdfb5ab78068468a180269243df116ddc0506a81889734550\""
Mar 3 13:53:55.397420 containerd[1553]: time="2026-03-03T13:53:55.397380873Z" level=info msg="StartContainer for \"dbcd10214d06724bdfb5ab78068468a180269243df116ddc0506a81889734550\""
Mar 3 13:53:55.414599 containerd[1553]: time="2026-03-03T13:53:55.413688518Z" level=info msg="connecting to shim dbcd10214d06724bdfb5ab78068468a180269243df116ddc0506a81889734550" address="unix:///run/containerd/s/0363ae13cdbae4766bcb4686bb61cf4c81f2c81c6ddaabb73589752119fc9894" protocol=ttrpc version=3
Mar 3 13:53:55.565418 systemd[1]: Started cri-containerd-dbcd10214d06724bdfb5ab78068468a180269243df116ddc0506a81889734550.scope - libcontainer container dbcd10214d06724bdfb5ab78068468a180269243df116ddc0506a81889734550.
Mar 3 13:53:55.858694 systemd[1]: cri-containerd-dbcd10214d06724bdfb5ab78068468a180269243df116ddc0506a81889734550.scope: Deactivated successfully.
Mar 3 13:53:55.867350 containerd[1553]: time="2026-03-03T13:53:55.865235029Z" level=info msg="received container exit event container_id:\"dbcd10214d06724bdfb5ab78068468a180269243df116ddc0506a81889734550\" id:\"dbcd10214d06724bdfb5ab78068468a180269243df116ddc0506a81889734550\" pid:5240 exited_at:{seconds:1772546035 nanos:857829313}"
Mar 3 13:53:55.898866 containerd[1553]: time="2026-03-03T13:53:55.898682184Z" level=info msg="StartContainer for \"dbcd10214d06724bdfb5ab78068468a180269243df116ddc0506a81889734550\" returns successfully"
Mar 3 13:53:56.104719 kubelet[2706]: E0303 13:53:56.104586 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:53:56.107762 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dbcd10214d06724bdfb5ab78068468a180269243df116ddc0506a81889734550-rootfs.mount: Deactivated successfully.
Mar 3 13:53:57.292528 kubelet[2706]: E0303 13:53:57.289194 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:53:57.325492 kubelet[2706]: E0303 13:53:57.320213 2706 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 3 13:53:57.343188 containerd[1553]: time="2026-03-03T13:53:57.342647325Z" level=info msg="CreateContainer within sandbox \"bd6866b4e4655a73b2219f8456e13051ee532a12956167acc121dce099ba8138\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 3 13:53:57.402395 containerd[1553]: time="2026-03-03T13:53:57.398670360Z" level=info msg="Container 959e1dbb1d45bdaae0a409426fb9d0a0c6a0dc129b3732e3e2bee30d455d8b2d: CDI devices from CRI Config.CDIDevices: []"
Mar 3 13:53:57.459590 containerd[1553]: time="2026-03-03T13:53:57.459163893Z" level=info msg="CreateContainer within sandbox \"bd6866b4e4655a73b2219f8456e13051ee532a12956167acc121dce099ba8138\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"959e1dbb1d45bdaae0a409426fb9d0a0c6a0dc129b3732e3e2bee30d455d8b2d\""
Mar 3 13:53:57.461499 containerd[1553]: time="2026-03-03T13:53:57.461434238Z" level=info msg="StartContainer for \"959e1dbb1d45bdaae0a409426fb9d0a0c6a0dc129b3732e3e2bee30d455d8b2d\""
Mar 3 13:53:57.471240 containerd[1553]: time="2026-03-03T13:53:57.470691539Z" level=info msg="connecting to shim 959e1dbb1d45bdaae0a409426fb9d0a0c6a0dc129b3732e3e2bee30d455d8b2d" address="unix:///run/containerd/s/0363ae13cdbae4766bcb4686bb61cf4c81f2c81c6ddaabb73589752119fc9894" protocol=ttrpc version=3
Mar 3 13:53:57.554642 systemd[1]: Started cri-containerd-959e1dbb1d45bdaae0a409426fb9d0a0c6a0dc129b3732e3e2bee30d455d8b2d.scope - libcontainer container 959e1dbb1d45bdaae0a409426fb9d0a0c6a0dc129b3732e3e2bee30d455d8b2d.
Mar 3 13:53:57.764752 containerd[1553]: time="2026-03-03T13:53:57.764399388Z" level=info msg="StartContainer for \"959e1dbb1d45bdaae0a409426fb9d0a0c6a0dc129b3732e3e2bee30d455d8b2d\" returns successfully"
Mar 3 13:53:58.346416 kubelet[2706]: E0303 13:53:58.346212 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:53:59.297653 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Mar 3 13:53:59.380598 kubelet[2706]: E0303 13:53:59.380337 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:53:59.471838 containerd[1553]: time="2026-03-03T13:53:59.470369075Z" level=warning msg="container event discarded" container=97415b473779df1bc8607a6a5286b44637084e45fa2ab71aba9c3fa1daa82a21 type=CONTAINER_CREATED_EVENT
Mar 3 13:53:59.880224 containerd[1553]: time="2026-03-03T13:53:59.879359416Z" level=warning msg="container event discarded" container=97415b473779df1bc8607a6a5286b44637084e45fa2ab71aba9c3fa1daa82a21 type=CONTAINER_STARTED_EVENT
Mar 3 13:54:00.127674 containerd[1553]: time="2026-03-03T13:54:00.125587992Z" level=warning msg="container event discarded" container=97415b473779df1bc8607a6a5286b44637084e45fa2ab71aba9c3fa1daa82a21 type=CONTAINER_STOPPED_EVENT
Mar 3 13:54:00.397814 kubelet[2706]: E0303 13:54:00.397657 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:54:00.954354 containerd[1553]: time="2026-03-03T13:54:00.943874275Z" level=warning msg="container event discarded" container=3ff51058d6517cadd70293e8066a97c18dd6e2f06d16305fe4b2bd4fca070bda type=CONTAINER_CREATED_EVENT
Mar 3 13:54:01.039378 containerd[1553]: time="2026-03-03T13:54:01.039206919Z" level=warning msg="container event discarded" container=3ff51058d6517cadd70293e8066a97c18dd6e2f06d16305fe4b2bd4fca070bda type=CONTAINER_STARTED_EVENT
Mar 3 13:54:01.168641 containerd[1553]: time="2026-03-03T13:54:01.165567309Z" level=warning msg="container event discarded" container=3ff51058d6517cadd70293e8066a97c18dd6e2f06d16305fe4b2bd4fca070bda type=CONTAINER_STOPPED_EVENT
Mar 3 13:54:01.555647 kubelet[2706]: E0303 13:54:01.552490 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:54:01.965598 containerd[1553]: time="2026-03-03T13:54:01.965181508Z" level=warning msg="container event discarded" container=5e5028f42c900b29e36ba817085bd627902af2f7f9d5008d9a27ab3675c820d0 type=CONTAINER_CREATED_EVENT
Mar 3 13:54:02.158512 containerd[1553]: time="2026-03-03T13:54:02.158400841Z" level=warning msg="container event discarded" container=5e5028f42c900b29e36ba817085bd627902af2f7f9d5008d9a27ab3675c820d0 type=CONTAINER_STARTED_EVENT
Mar 3 13:54:02.231578 containerd[1553]: time="2026-03-03T13:54:02.231309576Z" level=warning msg="container event discarded" container=5e5028f42c900b29e36ba817085bd627902af2f7f9d5008d9a27ab3675c820d0 type=CONTAINER_STOPPED_EVENT
Mar 3 13:54:04.359189 containerd[1553]: time="2026-03-03T13:54:04.357627138Z" level=warning msg="container event discarded" container=b84ec1b0347b9ad41da242f98c922c7c744ebf583240b18242fc9ba440a5107b type=CONTAINER_CREATED_EVENT
Mar 3 13:54:05.683851 containerd[1553]: time="2026-03-03T13:54:05.683719933Z" level=warning msg="container event discarded" container=b84ec1b0347b9ad41da242f98c922c7c744ebf583240b18242fc9ba440a5107b type=CONTAINER_STARTED_EVENT
Mar 3 13:54:06.539458 containerd[1553]: time="2026-03-03T13:54:06.538698839Z" level=warning msg="container event discarded" container=b84ec1b0347b9ad41da242f98c922c7c744ebf583240b18242fc9ba440a5107b type=CONTAINER_STOPPED_EVENT
Mar 3 13:54:06.917872 containerd[1553]: time="2026-03-03T13:54:06.917719703Z" level=warning msg="container event discarded" container=d593133299325dfa06eca4e2c40028a9d3467d524ef4ec186e62527964dfe6ed type=CONTAINER_CREATED_EVENT
Mar 3 13:54:07.212337 containerd[1553]: time="2026-03-03T13:54:07.210818813Z" level=warning msg="container event discarded" container=d593133299325dfa06eca4e2c40028a9d3467d524ef4ec186e62527964dfe6ed type=CONTAINER_STARTED_EVENT
Mar 3 13:54:08.462428 containerd[1553]: time="2026-03-03T13:54:08.432902139Z" level=warning msg="container event discarded" container=914fcb9bd19046b967dfdf7756220596d43bcaaf1136a112f8ccaed2bf61af68 type=CONTAINER_CREATED_EVENT
Mar 3 13:54:08.886568 containerd[1553]: time="2026-03-03T13:54:08.883520358Z" level=warning msg="container event discarded" container=914fcb9bd19046b967dfdf7756220596d43bcaaf1136a112f8ccaed2bf61af68 type=CONTAINER_STARTED_EVENT
Mar 3 13:54:09.853306 systemd-networkd[1461]: lxc_health: Link UP
Mar 3 13:54:09.856355 systemd-networkd[1461]: lxc_health: Gained carrier
Mar 3 13:54:10.192358 kubelet[2706]: E0303 13:54:10.191059 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:54:10.306882 kubelet[2706]: E0303 13:54:10.306642 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:54:10.604253 kubelet[2706]: I0303 13:54:10.602848 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qf7bs" podStartSLOduration=21.602784064 podStartE2EDuration="21.602784064s" podCreationTimestamp="2026-03-03 13:53:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 13:53:58.418803691 +0000 UTC m=+322.014533858" watchObservedRunningTime="2026-03-03 13:54:10.602784064 +0000 UTC m=+334.198514211"
Mar 3 13:54:11.073504 systemd-networkd[1461]: lxc_health: Gained IPv6LL
Mar 3 13:54:11.316358 kubelet[2706]: E0303 13:54:11.316304 2706 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:54:15.124200 sshd[5071]: Connection closed by 10.0.0.1 port 54000
Mar 3 13:54:15.122923 sshd-session[5037]: pam_unix(sshd:session): session closed for user core
Mar 3 13:54:15.130733 systemd[1]: sshd@54-10.0.0.100:22-10.0.0.1:54000.service: Deactivated successfully.
Mar 3 13:54:15.137332 systemd[1]: session-55.scope: Deactivated successfully.
Mar 3 13:54:15.148281 systemd-logind[1535]: Session 55 logged out. Waiting for processes to exit.
Mar 3 13:54:15.154378 systemd-logind[1535]: Removed session 55.