Mar 7 01:09:37.094688 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 6 22:58:19 -00 2026
Mar 7 01:09:37.094722 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:09:37.094738 kernel: BIOS-provided physical RAM map:
Mar 7 01:09:37.094748 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 7 01:09:37.094756 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 7 01:09:37.094767 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 7 01:09:37.094777 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 7 01:09:37.094785 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 7 01:09:37.094795 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 7 01:09:37.094808 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 7 01:09:37.094817 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 7 01:09:37.094825 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 7 01:09:37.094896 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 7 01:09:37.094986 kernel: NX (Execute Disable) protection: active
Mar 7 01:09:37.095000 kernel: APIC: Static calls initialized
Mar 7 01:09:37.095051 kernel: SMBIOS 2.8 present.
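As a quick cross-check of the e820 map above, the two `usable` regions can be summed and compared with the `Memory: 2434604K/2571752K available` line that appears later in this log. A minimal Python sketch; the entry strings are copied verbatim from the log, nothing else is assumed:

```python
import re

# A few of the BIOS-e820 entries from the map above, verbatim.
E820 = """\
BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
"""

PATTERN = re.compile(r"\[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

usable = 0
for start, end, kind in PATTERN.findall(E820):
    if kind == "usable":
        # e820 ranges are inclusive, hence the +1.
        usable += int(end, 16) - int(start, 16) + 1

# Prints ~2511.5 MiB, consistent with the 2571752K total reported later.
print(f"usable: {usable / 2**20:.1f} MiB")
```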
Mar 7 01:09:37.095061 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 7 01:09:37.095070 kernel: Hypervisor detected: KVM
Mar 7 01:09:37.095080 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 7 01:09:37.095089 kernel: kvm-clock: using sched offset of 25532897714 cycles
Mar 7 01:09:37.095099 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 7 01:09:37.095108 kernel: tsc: Detected 2445.426 MHz processor
Mar 7 01:09:37.095118 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 7 01:09:37.095128 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 7 01:09:37.095142 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 7 01:09:37.095152 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 7 01:09:37.095161 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 7 01:09:37.095171 kernel: Using GB pages for direct mapping
Mar 7 01:09:37.095180 kernel: ACPI: Early table checksum verification disabled
Mar 7 01:09:37.095191 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 7 01:09:37.095203 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:09:37.095212 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:09:37.095221 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:09:37.095239 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 7 01:09:37.095249 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:09:37.095258 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:09:37.099990 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:09:37.100008 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 01:09:37.100019 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 7 01:09:37.100029 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 7 01:09:37.100052 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 7 01:09:37.100066 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 7 01:09:37.100077 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 7 01:09:37.100087 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 7 01:09:37.100097 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 7 01:09:37.100142 kernel: No NUMA configuration found
Mar 7 01:09:37.100154 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 7 01:09:37.100169 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Mar 7 01:09:37.100179 kernel: Zone ranges:
Mar 7 01:09:37.100190 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 7 01:09:37.100200 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 7 01:09:37.100210 kernel: Normal empty
Mar 7 01:09:37.100220 kernel: Movable zone start for each node
Mar 7 01:09:37.100230 kernel: Early memory node ranges
Mar 7 01:09:37.100240 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 7 01:09:37.100250 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 7 01:09:37.100260 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 7 01:09:37.100326 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 7 01:09:37.100372 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 7 01:09:37.100383 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 7 01:09:37.100393 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 7 01:09:37.100403 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 7 01:09:37.100414 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 7 01:09:37.100424 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 7 01:09:37.100434 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 7 01:09:37.100444 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 7 01:09:37.100459 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 7 01:09:37.100469 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 7 01:09:37.100479 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 7 01:09:37.100490 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 7 01:09:37.100500 kernel: TSC deadline timer available
Mar 7 01:09:37.100510 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 7 01:09:37.100520 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 7 01:09:37.100530 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 7 01:09:37.102491 kernel: kvm-guest: setup PV sched yield
Mar 7 01:09:37.102524 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 7 01:09:37.102535 kernel: Booting paravirtualized kernel on KVM
Mar 7 01:09:37.102548 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 7 01:09:37.102562 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 7 01:09:37.102618 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Mar 7 01:09:37.102632 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Mar 7 01:09:37.102642 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 7 01:09:37.102652 kernel: kvm-guest: PV spinlocks enabled
Mar 7 01:09:37.102662 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 7 01:09:37.102680 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:09:37.102691 kernel: random: crng init done
Mar 7 01:09:37.102702 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 7 01:09:37.103202 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 7 01:09:37.103216 kernel: Fallback order for Node 0: 0
Mar 7 01:09:37.103227 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Mar 7 01:09:37.103237 kernel: Policy zone: DMA32
Mar 7 01:09:37.103248 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 7 01:09:37.104008 kernel: Memory: 2434604K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 136888K reserved, 0K cma-reserved)
Mar 7 01:09:37.104023 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 7 01:09:37.104034 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 7 01:09:37.104045 kernel: ftrace: allocated 149 pages with 4 groups
Mar 7 01:09:37.104055 kernel: Dynamic Preempt: voluntary
Mar 7 01:09:37.104065 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 7 01:09:37.104086 kernel: rcu: RCU event tracing is enabled.
Mar 7 01:09:37.104142 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 7 01:09:37.104154 kernel: Trampoline variant of Tasks RCU enabled.
Mar 7 01:09:37.104171 kernel: Rude variant of Tasks RCU enabled.
Mar 7 01:09:37.104181 kernel: Tracing variant of Tasks RCU enabled.
Mar 7 01:09:37.104192 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 7 01:09:37.104202 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 7 01:09:37.104250 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 7 01:09:37.104304 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 7 01:09:37.104317 kernel: Console: colour VGA+ 80x25
Mar 7 01:09:37.104327 kernel: printk: console [ttyS0] enabled
Mar 7 01:09:37.104337 kernel: ACPI: Core revision 20230628
Mar 7 01:09:37.104353 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 7 01:09:37.104363 kernel: APIC: Switch to symmetric I/O mode setup
Mar 7 01:09:37.104373 kernel: x2apic enabled
Mar 7 01:09:37.104383 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 7 01:09:37.104395 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 7 01:09:37.104407 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 7 01:09:37.104417 kernel: kvm-guest: setup PV IPIs
Mar 7 01:09:37.104427 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 7 01:09:37.104453 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 7 01:09:37.104464 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 7 01:09:37.104475 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 7 01:09:37.104486 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 7 01:09:37.104500 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 7 01:09:37.104511 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 7 01:09:37.106375 kernel: Spectre V2 : Mitigation: Retpolines
Mar 7 01:09:37.106391 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 7 01:09:37.106402 kernel: Speculative Store Bypass: Vulnerable
Mar 7 01:09:37.106421 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 7 01:09:37.106466 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 7 01:09:37.106477 kernel: active return thunk: srso_alias_return_thunk
Mar 7 01:09:37.106488 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 7 01:09:37.106499 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 7 01:09:37.106510 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 7 01:09:37.106521 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 7 01:09:37.106532 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 7 01:09:37.106547 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 7 01:09:37.106593 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 7 01:09:37.106608 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 7 01:09:37.106619 kernel: Freeing SMP alternatives memory: 32K
Mar 7 01:09:37.106629 kernel: pid_max: default: 32768 minimum: 301
Mar 7 01:09:37.106640 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 7 01:09:37.106651 kernel: landlock: Up and running.
Mar 7 01:09:37.106662 kernel: SELinux: Initializing.
Mar 7 01:09:37.106674 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 01:09:37.106690 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 01:09:37.106743 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 7 01:09:37.106756 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 7 01:09:37.106767 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 7 01:09:37.106778 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 7 01:09:37.106789 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 7 01:09:37.106800 kernel: signal: max sigframe size: 1776
Mar 7 01:09:37.106881 kernel: rcu: Hierarchical SRCU implementation.
Mar 7 01:09:37.106897 kernel: rcu: Max phase no-delay instances is 400.
Mar 7 01:09:37.106991 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 7 01:09:37.107003 kernel: smp: Bringing up secondary CPUs ...
Mar 7 01:09:37.107015 kernel: smpboot: x86: Booting SMP configuration:
Mar 7 01:09:37.107026 kernel: .... node #0, CPUs: #1 #2 #3
Mar 7 01:09:37.107038 kernel: smp: Brought up 1 node, 4 CPUs
Mar 7 01:09:37.107050 kernel: smpboot: Max logical packages: 1
Mar 7 01:09:37.107062 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 7 01:09:37.107074 kernel: devtmpfs: initialized
Mar 7 01:09:37.107086 kernel: x86/mm: Memory block size: 128MB
Mar 7 01:09:37.107105 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 7 01:09:37.107119 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 7 01:09:37.107131 kernel: pinctrl core: initialized pinctrl subsystem
Mar 7 01:09:37.107143 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 7 01:09:37.107154 kernel: audit: initializing netlink subsys (disabled)
Mar 7 01:09:37.107165 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 7 01:09:37.107176 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 7 01:09:37.107187 kernel: audit: type=2000 audit(1772845756.486:1): state=initialized audit_enabled=0 res=1
Mar 7 01:09:37.107198 kernel: cpuidle: using governor menu
Mar 7 01:09:37.107216 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 7 01:09:37.107229 kernel: dca service started, version 1.12.1
Mar 7 01:09:37.107242 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 7 01:09:37.107255 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 7 01:09:37.107317 kernel: PCI: Using configuration type 1 for base access
Mar 7 01:09:37.107331 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 7 01:09:37.107342 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 7 01:09:37.107353 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 7 01:09:37.107364 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 7 01:09:37.107381 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 7 01:09:37.107392 kernel: ACPI: Added _OSI(Module Device)
Mar 7 01:09:37.107403 kernel: ACPI: Added _OSI(Processor Device)
Mar 7 01:09:37.107414 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 7 01:09:37.107426 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 7 01:09:37.107436 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 7 01:09:37.107447 kernel: ACPI: Interpreter enabled
Mar 7 01:09:37.107458 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 7 01:09:37.107469 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 7 01:09:37.107486 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 7 01:09:37.107499 kernel: PCI: Using E820 reservations for host bridge windows
Mar 7 01:09:37.107511 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 7 01:09:37.107524 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 7 01:09:37.113635 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 7 01:09:37.116774 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 7 01:09:37.119619 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 7 01:09:37.119660 kernel: PCI host bridge to bus 0000:00
Mar 7 01:09:37.127700 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 7 01:09:37.130459 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 7 01:09:37.130689 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 7 01:09:37.130991 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 7 01:09:37.131194 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 7 01:09:37.131435 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 7 01:09:37.131645 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 7 01:09:37.132083 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 7 01:09:37.140818 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 7 01:09:37.141147 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 7 01:09:37.145054 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 7 01:09:37.148654 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 7 01:09:37.148904 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 7 01:09:37.149590 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 7 01:09:37.153123 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 7 01:09:37.154206 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 7 01:09:37.154502 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 7 01:09:37.181716 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 7 01:09:37.182100 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Mar 7 01:09:37.198985 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 7 01:09:37.204781 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 7 01:09:37.205250 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 7 01:09:37.210084 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Mar 7 01:09:37.212059 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Mar 7 01:09:37.212389 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 7 01:09:37.212632 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 7 01:09:37.213090 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 7 01:09:37.214168 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 7 01:09:37.214429 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0x180 took 22460 usecs
Mar 7 01:09:37.214708 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 7 01:09:37.217189 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Mar 7 01:09:37.217486 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Mar 7 01:09:37.221986 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 7 01:09:37.223453 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 7 01:09:37.223485 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 7 01:09:37.223498 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 7 01:09:37.223509 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 7 01:09:37.223520 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 7 01:09:37.223530 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 7 01:09:37.223541 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 7 01:09:37.223552 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 7 01:09:37.223572 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 7 01:09:37.223584 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 7 01:09:37.223598 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 7 01:09:37.223612 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 7 01:09:37.223625 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 7 01:09:37.223636 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 7 01:09:37.223649 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 7 01:09:37.223663 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 7 01:09:37.223674 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 7 01:09:37.223690 kernel: iommu: Default domain type: Translated
Mar 7 01:09:37.223701 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 7 01:09:37.223712 kernel: PCI: Using ACPI for IRQ routing
Mar 7 01:09:37.223722 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 7 01:09:37.223733 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 7 01:09:37.223744 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 7 01:09:37.224079 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 7 01:09:37.226510 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 7 01:09:37.226741 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 7 01:09:37.226769 kernel: vgaarb: loaded
Mar 7 01:09:37.226782 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 7 01:09:37.226792 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 7 01:09:37.226803 kernel: clocksource: Switched to clocksource kvm-clock
Mar 7 01:09:37.226815 kernel: VFS: Disk quotas dquot_6.6.0
Mar 7 01:09:37.226828 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 7 01:09:37.226841 kernel: pnp: PnP ACPI init
Mar 7 01:09:37.228182 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 7 01:09:37.228217 kernel: pnp: PnP ACPI: found 6 devices
Mar 7 01:09:37.228232 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 7 01:09:37.228243 kernel: NET: Registered PF_INET protocol family
Mar 7 01:09:37.228256 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 7 01:09:37.228329 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 7 01:09:37.228343 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 7 01:09:37.228356 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 7 01:09:37.228367 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 7 01:09:37.228379 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 7 01:09:37.228400 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 01:09:37.228412 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 01:09:37.228423 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 7 01:09:37.228434 kernel: NET: Registered PF_XDP protocol family
Mar 7 01:09:37.229830 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 7 01:09:37.230158 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 7 01:09:37.230418 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 7 01:09:37.241392 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 7 01:09:37.241762 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 7 01:09:37.242027 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 7 01:09:37.242045 kernel: PCI: CLS 0 bytes, default 64
Mar 7 01:09:37.242057 kernel: Initialise system trusted keyrings
Mar 7 01:09:37.242069 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 7 01:09:37.242080 kernel: Key type asymmetric registered
Mar 7 01:09:37.242091 kernel: Asymmetric key parser 'x509' registered
Mar 7 01:09:37.242101 kernel: hrtimer: interrupt took 5622027 ns
Mar 7 01:09:37.242112 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 7 01:09:37.242130 kernel: io scheduler mq-deadline registered
Mar 7 01:09:37.242141 kernel: io scheduler kyber registered
Mar 7 01:09:37.242155 kernel: io scheduler bfq registered
Mar 7 01:09:37.242165 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 7 01:09:37.242177 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 7 01:09:37.242188 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 7 01:09:37.242199 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 7 01:09:37.242210 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 7 01:09:37.242221 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 7 01:09:37.242236 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 7 01:09:37.242247 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 7 01:09:37.242258 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 7 01:09:37.246182 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 7 01:09:37.246434 kernel: rtc_cmos 00:04: registered as rtc0
Mar 7 01:09:37.246610 kernel: rtc_cmos 00:04: setting system clock to 2026-03-07T01:09:32 UTC (1772845772)
Mar 7 01:09:37.246626 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Mar 7 01:09:37.246798 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 7 01:09:37.246822 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 7 01:09:37.246834 kernel: NET: Registered PF_INET6 protocol family
Mar 7 01:09:37.246844 kernel: Segment Routing with IPv6
Mar 7 01:09:37.246856 kernel: In-situ OAM (IOAM) with IPv6
Mar 7 01:09:37.246869 kernel: NET: Registered PF_PACKET protocol family
Mar 7 01:09:37.246883 kernel: Key type dns_resolver registered
Mar 7 01:09:37.246897 kernel: IPI shorthand broadcast: enabled
Mar 7 01:09:37.246995 kernel: sched_clock: Marking stable (13432098206, 2043782468)->(18032482347, -2556601673)
Mar 7 01:09:37.247008 kernel: registered taskstats version 1
Mar 7 01:09:37.247025 kernel: Loading compiled-in X.509 certificates
Mar 7 01:09:37.247035 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: da286e6f6c247ee6f65a875c513de7da57782e90'
Mar 7 01:09:37.247047 kernel: Key type .fscrypt registered
Mar 7 01:09:37.247058 kernel: Key type fscrypt-provisioning registered
Mar 7 01:09:37.247070 kernel: ima: No TPM chip found, activating TPM-bypass!
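The rtc_cmos entry above records the same instant in two forms, an ISO timestamp and a Unix epoch value (1772845772). A one-liner confirms the two agree; nothing here is assumed beyond the logged value:

```python
from datetime import datetime, timezone

# "setting system clock to 2026-03-07T01:09:32 UTC (1772845772)"
print(datetime.fromtimestamp(1772845772, tz=timezone.utc).isoformat())
# -> 2026-03-07T01:09:32+00:00
```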
Mar 7 01:09:37.247084 kernel: ima: Allocated hash algorithm: sha1
Mar 7 01:09:37.247097 kernel: ima: No architecture policies found
Mar 7 01:09:37.247109 kernel: clk: Disabling unused clocks
Mar 7 01:09:37.247120 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 7 01:09:37.247136 kernel: Write protecting the kernel read-only data: 36864k
Mar 7 01:09:37.247147 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 7 01:09:37.247158 kernel: Run /init as init process
Mar 7 01:09:37.247168 kernel: with arguments:
Mar 7 01:09:37.247179 kernel: /init
Mar 7 01:09:37.247190 kernel: with environment:
Mar 7 01:09:37.247202 kernel: HOME=/
Mar 7 01:09:37.247214 kernel: TERM=linux
Mar 7 01:09:37.247229 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 01:09:37.247253 systemd[1]: Detected virtualization kvm.
Mar 7 01:09:37.250486 systemd[1]: Detected architecture x86-64.
Mar 7 01:09:37.250507 systemd[1]: Running in initrd.
Mar 7 01:09:37.250520 systemd[1]: No hostname configured, using default hostname.
Mar 7 01:09:37.250531 systemd[1]: Hostname set to .
Mar 7 01:09:37.250543 systemd[1]: Initializing machine ID from VM UUID.
Mar 7 01:09:37.250554 systemd[1]: Queued start job for default target initrd.target.
Mar 7 01:09:37.250573 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 01:09:37.251488 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 01:09:37.251507 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 7 01:09:37.251520 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 01:09:37.251532 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 7 01:09:37.251545 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 7 01:09:37.251559 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 7 01:09:37.251578 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 7 01:09:37.251590 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 01:09:37.251601 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 01:09:37.251611 systemd[1]: Reached target paths.target - Path Units.
Mar 7 01:09:37.251640 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 01:09:37.251654 systemd[1]: Reached target swap.target - Swaps.
Mar 7 01:09:37.251669 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 01:09:37.251679 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 01:09:37.251690 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 01:09:37.251701 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 01:09:37.251713 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 7 01:09:37.251725 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 01:09:37.251737 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 01:09:37.251748 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 01:09:37.251763 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 01:09:37.251775 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 7 01:09:37.251788 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 01:09:37.251800 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 7 01:09:37.251812 systemd[1]: Starting systemd-fsck-usr.service...
Mar 7 01:09:37.251824 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 01:09:37.251837 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 01:09:37.251857 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:09:37.251871 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 7 01:09:37.252027 systemd-journald[194]: Collecting audit messages is disabled.
Mar 7 01:09:37.252067 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 01:09:37.252085 systemd[1]: Finished systemd-fsck-usr.service.
Mar 7 01:09:37.252108 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 01:09:37.252124 systemd-journald[194]: Journal started
Mar 7 01:09:37.252148 systemd-journald[194]: Runtime Journal (/run/log/journal/5bcc9c3ea6464eb381d89b4757f3b9b1) is 6.0M, max 48.4M, 42.3M free.
Mar 7 01:09:37.293474 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 01:09:37.403542 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 01:09:37.431049 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 01:09:37.454658 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 01:09:38.431655 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 7 01:09:38.431754 kernel: Bridge firewalling registered
Mar 7 01:09:37.606190 systemd-modules-load[195]: Inserted module 'overlay'
Mar 7 01:09:37.910577 systemd-modules-load[195]: Inserted module 'br_netfilter'
Mar 7 01:09:38.454386 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 01:09:38.508334 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:09:38.676242 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:09:38.828493 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 01:09:38.847265 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 01:09:38.898747 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 01:09:39.040644 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:09:39.114550 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 01:09:39.215238 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 7 01:09:39.297778 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 01:09:39.391262 dracut-cmdline[231]: dracut-dracut-053
Mar 7 01:09:39.420214 dracut-cmdline[231]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=531e046a631dbba7b4aae1b7955ffa961f5ce7d570e89a624d767cf739ab70b5
Mar 7 01:09:39.635153 systemd-resolved[232]: Positive Trust Anchors:
Mar 7 01:09:39.635215 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 01:09:39.637082 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 01:09:39.671984 systemd-resolved[232]: Defaulting to hostname 'linux'.
Mar 7 01:09:39.685813 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 01:09:39.699829 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:09:40.452371 kernel: SCSI subsystem initialized
Mar 7 01:09:40.538516 kernel: Loading iSCSI transport class v2.0-870.
Mar 7 01:09:40.820987 kernel: iscsi: registered transport (tcp)
Mar 7 01:09:41.053078 kernel: iscsi: registered transport (qla4xxx)
Mar 7 01:09:41.053174 kernel: QLogic iSCSI HBA Driver
Mar 7 01:09:41.639744 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 7 01:09:41.686550 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 7 01:09:41.833450 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 7 01:09:41.833537 kernel: device-mapper: uevent: version 1.0.3
Mar 7 01:09:41.845399 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 7 01:09:42.256148 kernel: raid6: avx2x4 gen() 13267 MB/s
Mar 7 01:09:42.283073 kernel: raid6: avx2x2 gen() 4998 MB/s
Mar 7 01:09:42.333406 kernel: raid6: avx2x1 gen() 4952 MB/s
Mar 7 01:09:42.333489 kernel: raid6: using algorithm avx2x4 gen() 13267 MB/s
Mar 7 01:09:42.365200 kernel: raid6: .... xor() 1944 MB/s, rmw enabled
Mar 7 01:09:42.366059 kernel: raid6: using avx2x2 recovery algorithm
Mar 7 01:09:42.621112 kernel: xor: automatically using best checksumming function avx
Mar 7 01:09:44.036680 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 7 01:09:44.124657 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 01:09:44.206158 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 01:09:44.283403 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Mar 7 01:09:44.309736 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 01:09:44.334144 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 7 01:09:44.475320 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation
Mar 7 01:09:44.740771 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 01:09:44.804683 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 01:09:45.057881 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 01:09:45.111610 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 7 01:09:45.248718 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 7 01:09:45.317501 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 01:09:45.389212 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:09:45.431649 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 01:09:45.618085 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 7 01:09:45.731834 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 01:09:45.832264 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 01:09:45.833702 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:09:45.918888 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:09:45.946741 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 01:09:45.948698 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:09:45.983595 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:09:46.228849 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 01:09:46.319058 kernel: cryptd: max_cpu_qlen set to 1000
Mar 7 01:09:46.319124 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 7 01:09:46.402480 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 7 01:09:46.450058 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 7 01:09:46.450159 kernel: GPT:9289727 != 19775487
Mar 7 01:09:46.450182 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 7 01:09:46.450201 kernel: GPT:9289727 != 19775487
Mar 7 01:09:46.450217 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 7 01:09:46.450235 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 7 01:09:47.278098 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 01:09:47.410245 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 01:09:47.547850 kernel: libata version 3.00 loaded.
Mar 7 01:09:47.759764 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 01:09:47.945044 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 7 01:09:48.008390 kernel: BTRFS: device fsid 3bed8db9-42ad-4483-9cc8-1ad17a6cd948 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (461)
Mar 7 01:09:48.008440 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (466)
Mar 7 01:09:48.038623 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 7 01:09:48.076994 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
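The GPT complaints above are the classic signature of a grown disk image: the backup header still sits at LBA 9289727, where the original ~4.8 GB image ended, while the resized virtual disk now ends at LBA 19775487. The log itself suggests GNU Parted for the repair, and the disk-uuid entries a little further down show the headers being rewritten. The arithmetic behind the logged numbers, as a small sketch:

```python
BLOCK = 512                         # logical block size from the virtio_blk line
blocks = 19775488                   # "[vda] 19775488 512-byte logical blocks"

print(blocks * BLOCK / 1e9)         # 10.125..., the "10.1 GB" in the log
print(blocks * BLOCK / 2**30)       # 9.429...,  the "9.43 GiB" in the log
print(blocks - 1)                   # 19775487: where the backup GPT header belongs
print((9289727 + 1) * BLOCK / 1e9)  # ~4.76 GB: the pre-resize image size
```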
Mar 7 01:09:48.085529 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 7 01:09:48.183581 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 7 01:09:48.254054 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 7 01:09:48.254142 kernel: ahci 0000:00:1f.2: version 3.0
Mar 7 01:09:48.255352 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 7 01:09:48.286216 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 7 01:09:48.399388 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 7 01:09:48.399728 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 7 01:09:48.400099 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 7 01:09:48.400125 kernel: scsi host0: ahci
Mar 7 01:09:48.403720 kernel: scsi host1: ahci
Mar 7 01:09:48.404187 disk-uuid[512]: Primary Header is updated.
Mar 7 01:09:48.404187 disk-uuid[512]: Secondary Entries is updated.
Mar 7 01:09:48.404187 disk-uuid[512]: Secondary Header is updated.
Mar 7 01:09:48.474781 kernel: scsi host2: ahci
Mar 7 01:09:48.475286 kernel: scsi host3: ahci
Mar 7 01:09:48.476669 kernel: scsi host4: ahci
Mar 7 01:09:48.477070 kernel: scsi host5: ahci
Mar 7 01:09:48.484626 kernel: AES CTR mode by8 optimization enabled
Mar 7 01:09:48.484663 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Mar 7 01:09:48.484679 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Mar 7 01:09:48.484692 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 7 01:09:48.487140 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Mar 7 01:09:48.487186 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Mar 7 01:09:48.487207 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Mar 7 01:09:48.487224 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Mar 7 01:09:48.639833 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 7 01:09:48.830057 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 7 01:09:48.847905 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 7 01:09:48.867019 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 7 01:09:48.867103 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 7 01:09:48.890233 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 7 01:09:48.895416 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 7 01:09:48.905760 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 7 01:09:48.913450 kernel: ata3.00: applying bridge limits
Mar 7 01:09:48.930840 kernel: ata3.00: configured for UDMA/100
Mar 7 01:09:48.968744 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 7 01:09:49.618625 disk-uuid[513]: Warning: The kernel is still using the old partition table.
Mar 7 01:09:49.618625 disk-uuid[513]: The new table will be used at the next reboot or after you
Mar 7 01:09:49.618625 disk-uuid[513]: run partprobe(8) or kpartx(8)
Mar 7 01:09:49.618625 disk-uuid[513]: The operation has completed successfully.
Mar 7 01:09:49.841211 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 7 01:09:49.844402 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 7 01:09:49.917579 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 7 01:09:51.291832 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 7 01:09:51.296104 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 7 01:09:51.393814 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 7 01:09:51.450441 sh[591]: Success
Mar 7 01:09:51.879757 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 7 01:09:52.206885 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 7 01:09:52.334678 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 7 01:09:52.391425 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 7 01:09:52.525235 kernel: BTRFS info (device dm-0): first mount of filesystem 3bed8db9-42ad-4483-9cc8-1ad17a6cd948
Mar 7 01:09:52.526044 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:09:52.543835 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 7 01:09:52.544752 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 7 01:09:52.574437 kernel: BTRFS info (device dm-0): using free space tree
Mar 7 01:09:52.701274 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 7 01:09:52.713456 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 7 01:09:52.794702 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 7 01:09:52.850091 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 7 01:09:52.985568 kernel: BTRFS info (device vda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:09:52.985607 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:09:52.985622 kernel: BTRFS info (device vda6): using free space tree
Mar 7 01:09:52.985637 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 7 01:09:53.094653 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 7 01:09:53.130355 kernel: BTRFS info (device vda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:09:53.232663 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 7 01:09:53.310077 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 7 01:09:55.832142 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 01:09:55.879378 ignition[679]: Ignition 2.19.0
Mar 7 01:09:55.879468 ignition[679]: Stage: fetch-offline
Mar 7 01:09:55.879688 ignition[679]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:09:55.879729 ignition[679]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 01:09:55.884300 ignition[679]: parsed url from cmdline: ""
Mar 7 01:09:55.884348 ignition[679]: no config URL provided
Mar 7 01:09:55.884361 ignition[679]: reading system config file "/usr/lib/ignition/user.ign"
Mar 7 01:09:55.884381 ignition[679]: no config at "/usr/lib/ignition/user.ign"
Mar 7 01:09:55.884551 ignition[679]: op(1): [started] loading QEMU firmware config module
Mar 7 01:09:55.884560 ignition[679]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 7 01:09:55.993480 ignition[679]: op(1): [finished] loading QEMU firmware config module
Mar 7 01:09:56.249237 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 01:09:56.915570 systemd-networkd[779]: lo: Link UP
Mar 7 01:09:56.915773 systemd-networkd[779]: lo: Gained carrier
Mar 7 01:09:56.929887 systemd-networkd[779]: Enumeration completed
Mar 7 01:09:56.932533 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:09:56.932540 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 01:09:56.935192 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 01:09:56.947467 systemd-networkd[779]: eth0: Link UP
Mar 7 01:09:56.947476 systemd-networkd[779]: eth0: Gained carrier
Mar 7 01:09:56.947496 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 01:09:56.949015 systemd[1]: Reached target network.target - Network.
Mar 7 01:09:57.121113 systemd-networkd[779]: eth0: DHCPv4 address 10.0.0.31/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 7 01:09:57.408849 ignition[679]: parsing config with SHA512: 97ad7d6b54a43fd9d8d5a71a7a7e74a9c3dc3da22a9733ee140742148eaa5a23237049421ff6498b5ea8bde88d0bd2a3effd1524c04d7677ca598543903358d6
Mar 7 01:09:57.469547 unknown[679]: fetched base config from "system"
Mar 7 01:09:57.469573 unknown[679]: fetched user config from "qemu"
Mar 7 01:09:57.493645 ignition[679]: fetch-offline: fetch-offline passed
Mar 7 01:09:57.926669 ignition[679]: Ignition finished successfully
Mar 7 01:09:57.937526 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 01:09:57.942050 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 7 01:09:57.986678 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 7 01:09:58.224262 ignition[783]: Ignition 2.19.0
Mar 7 01:09:58.224368 ignition[783]: Stage: kargs
Mar 7 01:09:58.224697 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:09:58.224716 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 01:09:58.250526 ignition[783]: kargs: kargs passed
Mar 7 01:09:58.250742 ignition[783]: Ignition finished successfully
Mar 7 01:09:58.350285 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 7 01:09:58.488675 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 7 01:09:58.690860 ignition[792]: Ignition 2.19.0
Mar 7 01:09:58.703110 ignition[792]: Stage: disks
Mar 7 01:09:58.711090 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Mar 7 01:09:58.711117 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 01:09:58.736992 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 7 01:09:58.719441 ignition[792]: disks: disks passed
Mar 7 01:09:58.719551 ignition[792]: Ignition finished successfully
Mar 7 01:09:58.804476 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 7 01:09:58.833654 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 7 01:09:58.876420 systemd-networkd[779]: eth0: Gained IPv6LL
Mar 7 01:09:58.889781 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 01:09:58.920721 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 01:09:58.950414 systemd[1]: Reached target basic.target - Basic System.
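The DHCPv4 lease recorded above (10.0.0.31/16, gateway 10.0.0.1) can be sanity-checked with the standard ipaddress module; this only validates the logged values, nothing about the wire traffic is assumed:

```python
import ipaddress

# "DHCPv4 address 10.0.0.31/16, gateway 10.0.0.1 acquired from 10.0.0.1"
iface = ipaddress.ip_interface("10.0.0.31/16")
print(iface.network)                                       # 10.0.0.0/16
print(iface.netmask)                                       # 255.255.0.0
print(ipaddress.ip_address("10.0.0.1") in iface.network)   # True: gateway is on-link
```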
Mar 7 01:09:59.139662 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 7 01:09:59.338056 systemd-fsck[801]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 7 01:09:59.377470 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 7 01:09:59.484133 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 7 01:10:00.354514 kernel: EXT4-fs (vda9): mounted filesystem aab0506b-de72-4dd2-9393-24d7958f49a5 r/w with ordered data mode. Quota mode: none.
Mar 7 01:10:00.377208 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 7 01:10:00.398382 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 7 01:10:00.453261 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:10:00.526421 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 7 01:10:00.547227 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 7 01:10:00.547374 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 7 01:10:00.687752 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (810)
Mar 7 01:10:00.687798 kernel: BTRFS info (device vda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:10:00.687819 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:10:00.687836 kernel: BTRFS info (device vda6): using free space tree
Mar 7 01:10:00.547435 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 01:10:00.711494 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 7 01:10:00.725651 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:10:00.743395 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 7 01:10:00.811660 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 7 01:10:01.093545 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory
Mar 7 01:10:01.155863 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory
Mar 7 01:10:01.213276 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory
Mar 7 01:10:01.267716 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 7 01:10:02.141753 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 7 01:10:02.215626 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 7 01:10:02.270733 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 7 01:10:02.308604 kernel: BTRFS info (device vda6): last unmount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:10:02.343133 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 7 01:10:02.675902 ignition[923]: INFO : Ignition 2.19.0
Mar 7 01:10:02.675902 ignition[923]: INFO : Stage: mount
Mar 7 01:10:02.701282 ignition[923]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:10:02.701282 ignition[923]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 01:10:02.701282 ignition[923]: INFO : mount: mount passed
Mar 7 01:10:02.701282 ignition[923]: INFO : Ignition finished successfully
Mar 7 01:10:02.696708 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 7 01:10:02.787054 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 7 01:10:02.812880 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 7 01:10:02.961124 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 01:10:03.072179 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (937)
Mar 7 01:10:03.101168 kernel: BTRFS info (device vda6): first mount of filesystem 872bf425-12c9-4ef2-aaf0-71379b3513d9
Mar 7 01:10:03.101385 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 7 01:10:03.110456 kernel: BTRFS info (device vda6): using free space tree
Mar 7 01:10:03.194296 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 7 01:10:03.213742 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 01:10:03.612589 ignition[954]: INFO : Ignition 2.19.0
Mar 7 01:10:03.612589 ignition[954]: INFO : Stage: files
Mar 7 01:10:03.681244 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 01:10:03.681244 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 7 01:10:03.681244 ignition[954]: DEBUG : files: compiled without relabeling support, skipping
Mar 7 01:10:03.681244 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 7 01:10:03.681244 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 7 01:10:03.815304 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 7 01:10:03.815304 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 7 01:10:03.815304 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 7 01:10:03.815304 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:10:03.815304 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 7 01:10:03.785608 unknown[954]: wrote ssh authorized keys file for user: core
Mar 7 01:10:04.198028 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 7 01:10:04.950274 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 7 01:10:04.950274 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 7 01:10:04.950274 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 7 01:10:05.430766 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 7 01:10:08.504801 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 7 01:10:08.504801 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 7 01:10:08.504801 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 7 01:10:08.504801 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:10:08.654603 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 01:10:08.654603 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:10:08.654603 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 01:10:08.654603 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:10:08.654603 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 01:10:08.654603 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:10:08.654603 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 01:10:08.654603 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 7 01:10:08.654603 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 7 01:10:08.654603 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 7 01:10:08.654603 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Mar 7 01:10:09.043175 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 7 01:10:25.613311 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 7 01:10:25.613311 ignition[954]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 7 01:10:25.705507 ignition[954]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:10:25.753885 ignition[954]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 01:10:25.753885 ignition[954]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 7 01:10:25.753885 ignition[954]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Mar 7 01:10:25.753885 ignition[954]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 7 01:10:25.753885 ignition[954]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 7 01:10:25.753885 ignition[954]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Mar 7 01:10:25.753885 ignition[954]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Mar 7 01:10:26.293329 ignition[954]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 7 01:10:26.395121 ignition[954]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 7 01:10:26.395121 ignition[954]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 7 01:10:26.395121 ignition[954]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Mar 7 01:10:26.395121 ignition[954]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Mar 7 01:10:26.395121 ignition[954]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:10:26.635625 ignition[954]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 01:10:26.635625 ignition[954]: INFO : files: files passed
Mar 7 01:10:26.635625 ignition[954]: INFO : Ignition finished successfully
Mar 7 01:10:26.602287 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 7 01:10:26.799682 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 7 01:10:26.841333 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 7 01:10:26.997650 initrd-setup-root-after-ignition[980]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 7 01:10:27.023594 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 7 01:10:27.023805 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 7 01:10:27.108786 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:10:27.108786 initrd-setup-root-after-ignition[983]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:10:27.164599 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 01:10:27.131536 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 01:10:27.232590 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 7 01:10:27.295839 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 7 01:10:27.790700 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 7 01:10:27.795567 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 7 01:10:27.912321 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 7 01:10:27.948290 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 7 01:10:28.010269 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 7 01:10:28.134016 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 7 01:10:28.359680 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 01:10:28.494098 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 7 01:10:28.598665 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 7 01:10:28.612637 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 01:10:28.638281 systemd[1]: Stopped target timers.target - Timer Units.
Mar 7 01:10:28.708337 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 7 01:10:28.708647 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 01:10:28.738018 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 7 01:10:28.769998 systemd[1]: Stopped target basic.target - Basic System. Mar 7 01:10:28.816119 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 7 01:10:28.860091 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 01:10:28.982128 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 7 01:10:29.009008 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 7 01:10:29.082860 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 7 01:10:29.127794 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 7 01:10:29.201119 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 7 01:10:29.223612 systemd[1]: Stopped target swap.target - Swaps. Mar 7 01:10:29.279756 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 7 01:10:29.298605 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 7 01:10:29.369349 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 7 01:10:29.392260 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 01:10:29.472480 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 7 01:10:29.473654 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 01:10:29.538634 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 7 01:10:29.543868 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 7 01:10:29.586665 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 7 01:10:29.586883 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 7 01:10:29.605185 systemd[1]: Stopped target paths.target - Path Units. Mar 7 01:10:29.614689 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 7 01:10:29.641728 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 01:10:29.698582 systemd[1]: Stopped target slices.target - Slice Units. Mar 7 01:10:29.713247 systemd[1]: Stopped target sockets.target - Socket Units. Mar 7 01:10:29.717842 systemd[1]: iscsid.socket: Deactivated successfully. Mar 7 01:10:29.718155 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 7 01:10:29.735855 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 7 01:10:29.736278 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 7 01:10:29.769694 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 7 01:10:29.772099 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 7 01:10:29.815694 systemd[1]: ignition-files.service: Deactivated successfully. Mar 7 01:10:29.816045 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 7 01:10:29.905496 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 7 01:10:29.960506 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 7 01:10:29.999348 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 7 01:10:30.023856 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 01:10:30.068261 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Mar 7 01:10:30.077537 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 01:10:30.109291 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 7 01:10:30.129330 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 7 01:10:30.129567 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 7 01:10:30.439538 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 7 01:10:30.439792 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 7 01:10:30.638023 ignition[1008]: INFO : Ignition 2.19.0 Mar 7 01:10:30.638023 ignition[1008]: INFO : Stage: umount Mar 7 01:10:30.664053 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 01:10:30.664053 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 7 01:10:30.664053 ignition[1008]: INFO : umount: umount passed Mar 7 01:10:30.664053 ignition[1008]: INFO : Ignition finished successfully Mar 7 01:10:30.699298 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 7 01:10:30.699784 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 7 01:10:30.714671 systemd[1]: Stopped target network.target - Network. Mar 7 01:10:30.796346 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 7 01:10:30.796712 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 7 01:10:30.834558 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 7 01:10:30.834780 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 7 01:10:30.835033 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 7 01:10:30.835127 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 7 01:10:30.835249 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 7 01:10:30.835330 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 7 01:10:30.842312 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 7 01:10:30.842519 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 7 01:10:31.119168 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 7 01:10:31.130796 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 7 01:10:31.179364 systemd-networkd[779]: eth0: DHCPv6 lease lost Mar 7 01:10:31.188052 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 7 01:10:31.188349 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 7 01:10:31.253822 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 7 01:10:31.254182 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 7 01:10:31.310582 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 7 01:10:31.310705 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 7 01:10:31.420663 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 7 01:10:31.440632 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 7 01:10:31.440766 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 7 01:10:31.466160 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 7 01:10:31.466277 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:10:31.504292 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 7 01:10:31.504387 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Mar 7 01:10:31.622372 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 7 01:10:31.622584 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 01:10:31.666716 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 01:10:31.769565 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 7 01:10:31.769862 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 01:10:31.799554 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 7 01:10:31.799697 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 7 01:10:31.848379 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 7 01:10:31.848559 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 01:10:31.848679 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 7 01:10:31.848773 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 7 01:10:31.849140 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 7 01:10:31.849211 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 7 01:10:31.849366 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 7 01:10:31.849508 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 01:10:31.874288 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 7 01:10:31.879548 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 7 01:10:31.879674 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 01:10:31.879821 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 7 01:10:31.879900 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 7 01:10:31.880111 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 7 01:10:31.880195 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 01:10:31.880302 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 01:10:31.880456 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:10:31.881234 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 7 01:10:31.881459 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 7 01:10:31.940356 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 7 01:10:31.941086 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 7 01:10:31.948788 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 7 01:10:31.985278 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 7 01:10:32.092747 systemd[1]: Switching root. Mar 7 01:10:32.354364 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Mar 7 01:10:33.130018 systemd-journald[194]: Journal stopped Mar 7 01:10:44.069133 kernel: SELinux: policy capability network_peer_controls=1 Mar 7 01:10:44.069301 kernel: SELinux: policy capability open_perms=1 Mar 7 01:10:44.069342 kernel: SELinux: policy capability extended_socket_class=1 Mar 7 01:10:44.069365 kernel: SELinux: policy capability always_check_network=0 Mar 7 01:10:44.069386 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 7 01:10:44.069406 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 7 01:10:44.077030 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 7 01:10:44.077094 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 7 01:10:44.077141 kernel: audit: type=1403 audit(1772845833.610:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 7 01:10:44.077167 systemd[1]: Successfully loaded SELinux policy in 310.095ms. Mar 7 01:10:44.077203 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 513.304ms. Mar 7 01:10:44.077236 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 7 01:10:44.077258 systemd[1]: Detected virtualization kvm. Mar 7 01:10:44.077278 systemd[1]: Detected architecture x86-64. Mar 7 01:10:44.077298 systemd[1]: Detected first boot. Mar 7 01:10:44.077318 systemd[1]: Initializing machine ID from VM UUID. Mar 7 01:10:44.077338 zram_generator::config[1065]: No configuration found. Mar 7 01:10:44.077359 systemd[1]: Populated /etc with preset unit settings. Mar 7 01:10:44.077386 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 7 01:10:44.077411 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 7 01:10:44.077502 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 7 01:10:44.077531 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 7 01:10:44.077555 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 7 01:10:44.077576 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 7 01:10:44.077597 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 7 01:10:44.077617 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 7 01:10:44.077635 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 7 01:10:44.077654 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 7 01:10:44.077682 systemd[1]: Created slice user.slice - User and Session Slice. Mar 7 01:10:44.077701 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 01:10:44.077720 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 01:10:44.077740 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 7 01:10:44.077759 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 7 01:10:44.077778 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Mar 7 01:10:44.077796 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 7 01:10:44.077816 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 7 01:10:44.077836 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 01:10:44.077861 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 7 01:10:44.077879 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 7 01:10:44.077902 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 7 01:10:44.078009 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 7 01:10:44.078035 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 01:10:44.078055 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 7 01:10:44.078074 systemd[1]: Reached target slices.target - Slice Units. Mar 7 01:10:44.078092 systemd[1]: Reached target swap.target - Swaps. Mar 7 01:10:44.078118 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 7 01:10:44.078137 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 7 01:10:44.078155 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 7 01:10:44.078174 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 7 01:10:44.078195 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 01:10:44.078213 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 7 01:10:44.078231 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 7 01:10:44.078250 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 7 01:10:44.078269 systemd[1]: Mounting media.mount - External Media Directory... Mar 7 01:10:44.078294 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:10:44.078313 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 7 01:10:44.078332 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 7 01:10:44.078350 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 7 01:10:44.078368 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 7 01:10:44.078386 systemd[1]: Reached target machines.target - Containers. Mar 7 01:10:44.078409 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 7 01:10:44.082592 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 01:10:44.082649 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 7 01:10:44.082670 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 7 01:10:44.082690 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 7 01:10:44.082709 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 7 01:10:44.082727 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 7 01:10:44.082746 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 7 01:10:44.082765 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Mar 7 01:10:44.082784 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 7 01:10:44.082808 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 7 01:10:44.082827 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 7 01:10:44.082845 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 7 01:10:44.082863 systemd[1]: Stopped systemd-fsck-usr.service. Mar 7 01:10:44.082882 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 7 01:10:44.082900 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 7 01:10:44.082995 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 7 01:10:44.083017 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 7 01:10:44.083036 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 7 01:10:44.083062 systemd[1]: verity-setup.service: Deactivated successfully. Mar 7 01:10:44.083080 systemd[1]: Stopped verity-setup.service. Mar 7 01:10:44.083105 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:10:44.083124 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 7 01:10:44.083144 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 7 01:10:44.083162 systemd[1]: Mounted media.mount - External Media Directory. Mar 7 01:10:44.083181 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 7 01:10:44.083199 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 7 01:10:44.083223 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 7 01:10:44.083241 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 7 01:10:44.083260 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 01:10:44.083343 systemd-journald[1150]: Collecting audit messages is disabled. Mar 7 01:10:44.088601 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 7 01:10:44.088656 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 7 01:10:44.088676 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 7 01:10:44.088699 systemd-journald[1150]: Journal started Mar 7 01:10:44.088736 systemd-journald[1150]: Runtime Journal (/run/log/journal/5bcc9c3ea6464eb381d89b4757f3b9b1) is 6.0M, max 48.4M, 42.3M free. Mar 7 01:10:39.098678 systemd[1]: Queued start job for default target multi-user.target. Mar 7 01:10:44.097706 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 7 01:10:39.183892 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 7 01:10:39.186543 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 7 01:10:39.188355 systemd[1]: systemd-journald.service: Consumed 2.627s CPU time. Mar 7 01:10:44.187686 systemd[1]: Started systemd-journald.service - Journal Service. Mar 7 01:10:44.305045 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 7 01:10:44.305516 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 7 01:10:44.327715 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Mar 7 01:10:44.377573 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 7 01:10:44.466413 kernel: fuse: init (API version 7.39) Mar 7 01:10:44.468404 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 7 01:10:44.631144 kernel: loop: module loaded Mar 7 01:10:44.720062 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 01:10:44.921214 kernel: ACPI: bus type drm_connector registered Mar 7 01:10:44.934751 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 7 01:10:44.970320 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 7 01:10:44.977614 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 7 01:10:44.978376 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 7 01:10:44.985139 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 7 01:10:44.986199 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 7 01:10:45.082137 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 7 01:10:45.130291 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 7 01:10:45.159175 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 7 01:10:45.180111 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 7 01:10:45.180379 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 7 01:10:45.277184 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 7 01:10:45.320754 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 7 01:10:45.393022 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 7 01:10:45.413083 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 01:10:45.431317 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 7 01:10:45.595170 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 7 01:10:45.621828 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 7 01:10:45.653129 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 7 01:10:45.682375 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 7 01:10:45.701187 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 7 01:10:45.737634 systemd-journald[1150]: Time spent on flushing to /var/log/journal/5bcc9c3ea6464eb381d89b4757f3b9b1 is 385.403ms for 951 entries. Mar 7 01:10:45.737634 systemd-journald[1150]: System Journal (/var/log/journal/5bcc9c3ea6464eb381d89b4757f3b9b1) is 8.0M, max 195.6M, 187.6M free. Mar 7 01:10:46.298357 systemd-journald[1150]: Received client request to flush runtime journal. Mar 7 01:10:46.300160 kernel: loop0: detected capacity change from 0 to 140768 Mar 7 01:10:45.758341 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 7 01:10:45.811721 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Mar 7 01:10:45.879886 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 7 01:10:46.147671 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 7 01:10:46.187386 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 7 01:10:46.206322 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 7 01:10:46.231423 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 7 01:10:46.295808 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 7 01:10:46.394511 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 7 01:10:46.525736 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 7 01:10:47.549349 udevadm[1183]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 7 01:10:47.655615 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 7 01:10:48.097425 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 7 01:10:48.106182 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:10:48.128335 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 7 01:10:48.158285 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. Mar 7 01:10:48.158315 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. Mar 7 01:10:48.201746 kernel: loop1: detected capacity change from 0 to 142488 Mar 7 01:10:48.222004 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 7 01:10:48.311085 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 7 01:10:48.800477 kernel: loop2: detected capacity change from 0 to 219192 Mar 7 01:10:49.195167 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 7 01:10:49.294642 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 7 01:10:49.406175 kernel: loop3: detected capacity change from 0 to 140768 Mar 7 01:10:49.618551 kernel: loop4: detected capacity change from 0 to 142488 Mar 7 01:10:49.797578 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. Mar 7 01:10:49.797625 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. Mar 7 01:10:49.823838 kernel: loop5: detected capacity change from 0 to 219192 Mar 7 01:10:49.852169 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 01:10:50.255498 (sd-merge)[1205]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 7 01:10:50.294741 (sd-merge)[1205]: Merged extensions into '/usr'. Mar 7 01:10:50.372973 systemd[1]: Reloading requested from client PID 1181 ('systemd-sysext') (unit systemd-sysext.service)... Mar 7 01:10:50.373012 systemd[1]: Reloading... Mar 7 01:10:51.419424 zram_generator::config[1230]: No configuration found. Mar 7 01:10:53.038712 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:10:53.282496 ldconfig[1176]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 7 01:10:53.833618 systemd[1]: Reloading finished in 3459 ms. 
Mar 7 01:10:54.149706 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 7 01:10:54.204728 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 7 01:10:54.343216 systemd[1]: Starting ensure-sysext.service... Mar 7 01:10:54.375497 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 7 01:10:54.411444 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 7 01:10:54.486427 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 01:10:54.523711 systemd[1]: Reloading requested from client PID 1270 ('systemctl') (unit ensure-sysext.service)... Mar 7 01:10:54.523764 systemd[1]: Reloading... Mar 7 01:10:54.608498 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 7 01:10:54.609309 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 7 01:10:54.629984 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 7 01:10:54.633838 systemd-tmpfiles[1271]: ACLs are not supported, ignoring. Mar 7 01:10:54.634177 systemd-tmpfiles[1271]: ACLs are not supported, ignoring. Mar 7 01:10:54.661234 systemd-tmpfiles[1271]: Detected autofs mount point /boot during canonicalization of boot. Mar 7 01:10:54.661525 systemd-tmpfiles[1271]: Skipping /boot Mar 7 01:10:54.690088 systemd-udevd[1274]: Using default interface naming scheme 'v255'. Mar 7 01:10:54.766773 systemd-tmpfiles[1271]: Detected autofs mount point /boot during canonicalization of boot. Mar 7 01:10:54.767070 systemd-tmpfiles[1271]: Skipping /boot Mar 7 01:10:55.065603 zram_generator::config[1319]: No configuration found. Mar 7 01:10:56.127012 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1310) Mar 7 01:10:56.666776 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:10:56.680004 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Mar 7 01:10:56.717037 kernel: ACPI: button: Power Button [PWRF] Mar 7 01:10:57.040077 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 7 01:10:57.095759 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 7 01:10:57.096561 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 7 01:10:57.234670 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 7 01:10:57.234803 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 7 01:10:57.270317 systemd[1]: Reloading finished in 2745 ms. Mar 7 01:10:57.304652 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 7 01:10:57.447288 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 01:10:57.494639 kernel: mousedev: PS/2 mouse device common for all mice Mar 7 01:10:57.549986 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 01:10:57.809091 systemd[1]: Finished ensure-sysext.service. Mar 7 01:10:57.857394 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Mar 7 01:10:57.875409 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 7 01:10:58.389022 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 7 01:10:59.289786 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 01:10:59.507422 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 7 01:10:59.540570 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 7 01:10:59.582617 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 7 01:10:59.623622 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 7 01:10:59.645643 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 01:10:59.660278 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 7 01:10:59.749233 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 7 01:10:59.796648 augenrules[1392]: No rules Mar 7 01:10:59.832988 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 7 01:10:59.898074 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 7 01:11:00.131394 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 7 01:11:00.205190 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 7 01:11:00.232665 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 01:11:00.255631 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 7 01:11:00.273059 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 7 01:11:00.297251 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 7 01:11:00.297571 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 7 01:11:00.307827 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 7 01:11:00.308269 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 7 01:11:00.327111 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 7 01:11:00.329082 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 7 01:11:00.347351 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 7 01:11:00.348034 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 7 01:11:00.373774 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 7 01:11:00.404880 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 7 01:11:00.425421 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 7 01:11:00.626369 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 7 01:11:00.628463 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 7 01:11:00.646452 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Mar 7 01:11:00.696570 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 7 01:11:00.719253 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 7 01:11:00.744315 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 7 01:11:00.766143 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 7 01:11:01.791631 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 01:11:01.971124 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 7 01:11:02.894366 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 7 01:11:02.936071 systemd[1]: Reached target time-set.target - System Time Set. Mar 7 01:11:03.023240 systemd-networkd[1397]: lo: Link UP Mar 7 01:11:03.023294 systemd-networkd[1397]: lo: Gained carrier Mar 7 01:11:03.032804 systemd-networkd[1397]: Enumeration completed Mar 7 01:11:03.035393 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 7 01:11:03.035780 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:11:03.035788 systemd-networkd[1397]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 01:11:03.046860 systemd-networkd[1397]: eth0: Link UP Mar 7 01:11:03.047093 systemd-networkd[1397]: eth0: Gained carrier Mar 7 01:11:03.047176 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 01:11:03.136457 systemd-resolved[1399]: Positive Trust Anchors: Mar 7 01:11:03.138190 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 7 01:11:03.139071 systemd-resolved[1399]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 7 01:11:03.139121 systemd-resolved[1399]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 7 01:11:03.196046 systemd-resolved[1399]: Defaulting to hostname 'linux'. Mar 7 01:11:03.260421 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 7 01:11:03.307204 systemd-networkd[1397]: eth0: DHCPv4 address 10.0.0.31/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 7 01:11:03.312248 systemd[1]: Reached target network.target - Network. Mar 7 01:11:03.322078 systemd-timesyncd[1400]: Network configuration changed, trying to establish connection. Mar 7 01:11:04.146205 systemd-resolved[1399]: Clock change detected. Flushing caches. Mar 7 01:11:04.146330 systemd-timesyncd[1400]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 7 01:11:04.157135 systemd-timesyncd[1400]: Initial clock synchronization to Sat 2026-03-07 01:11:04.146066 UTC. Mar 7 01:11:04.161054 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Mar 7 01:11:04.902811 systemd-networkd[1397]: eth0: Gained IPv6LL Mar 7 01:11:04.925441 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 7 01:11:04.944654 systemd[1]: Reached target network-online.target - Network is Online. Mar 7 01:11:05.294601 kernel: kvm_amd: TSC scaling supported Mar 7 01:11:05.310655 kernel: kvm_amd: Nested Virtualization enabled Mar 7 01:11:05.310845 kernel: kvm_amd: Nested Paging enabled Mar 7 01:11:05.310891 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 7 01:11:05.311002 kernel: kvm_amd: PMU virtualization is disabled Mar 7 01:11:05.925163 kernel: EDAC MC: Ver: 3.0.0 Mar 7 01:11:06.031321 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 7 01:11:06.115108 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 7 01:11:06.268295 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 7 01:11:06.410738 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 7 01:11:06.462777 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 7 01:11:06.495737 systemd[1]: Reached target sysinit.target - System Initialization. Mar 7 01:11:06.803255 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 7 01:11:06.828767 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 7 01:11:06.859605 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 7 01:11:06.896356 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 7 01:11:06.915598 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 7 01:11:06.939153 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 7 01:11:06.939238 systemd[1]: Reached target paths.target - Path Units. Mar 7 01:11:06.970180 systemd[1]: Reached target timers.target - Timer Units. Mar 7 01:11:07.018409 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 7 01:11:07.048575 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 7 01:11:07.245482 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 7 01:11:07.280747 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 7 01:11:07.301224 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 7 01:11:07.332390 systemd[1]: Reached target sockets.target - Socket Units. Mar 7 01:11:07.348982 systemd[1]: Reached target basic.target - Basic System. Mar 7 01:11:07.397219 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 7 01:11:07.397280 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 7 01:11:07.429498 systemd[1]: Starting containerd.service - containerd container runtime... Mar 7 01:11:07.487894 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 7 01:11:07.499782 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 7 01:11:07.524342 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Mar 7 01:11:07.666383 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 7 01:11:07.794844 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 7 01:11:07.807145 jq[1440]: false Mar 7 01:11:07.916673 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 7 01:11:08.271198 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:11:08.334428 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 7 01:11:08.368224 extend-filesystems[1441]: Found loop3 Mar 7 01:11:08.548187 extend-filesystems[1441]: Found loop4 Mar 7 01:11:08.548187 extend-filesystems[1441]: Found loop5 Mar 7 01:11:08.548187 extend-filesystems[1441]: Found sr0 Mar 7 01:11:08.548187 extend-filesystems[1441]: Found vda Mar 7 01:11:08.548187 extend-filesystems[1441]: Found vda1 Mar 7 01:11:08.548187 extend-filesystems[1441]: Found vda2 Mar 7 01:11:08.548187 extend-filesystems[1441]: Found vda3 Mar 7 01:11:08.548187 extend-filesystems[1441]: Found usr Mar 7 01:11:08.548187 extend-filesystems[1441]: Found vda4 Mar 7 01:11:08.548187 extend-filesystems[1441]: Found vda6 Mar 7 01:11:08.548187 extend-filesystems[1441]: Found vda7 Mar 7 01:11:08.548187 extend-filesystems[1441]: Found vda9 Mar 7 01:11:08.548187 extend-filesystems[1441]: Checking size of /dev/vda9 Mar 7 01:11:09.093817 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 7 01:11:09.093893 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1298) Mar 7 01:11:08.777071 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 7 01:11:09.100204 extend-filesystems[1441]: Resized partition /dev/vda9 Mar 7 01:11:08.967988 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 7 01:11:09.148194 extend-filesystems[1451]: resize2fs 1.47.1 (20-May-2024) Mar 7 01:11:09.241062 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 7 01:11:09.440294 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 7 01:11:10.717305 dbus-daemon[1439]: [system] SELinux support is enabled Mar 7 01:11:10.721075 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 7 01:11:10.745148 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 7 01:11:10.768348 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 7 01:11:10.847593 systemd[1]: Starting update-engine.service - Update Engine... Mar 7 01:11:10.919616 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 7 01:11:11.000438 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 7 01:11:11.131852 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Mar 7 01:11:11.166769 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 7 01:11:11.170037 jq[1469]: true Mar 7 01:11:11.388618 update_engine[1464]: I20260307 01:11:11.209391 1464 main.cc:92] Flatcar Update Engine starting Mar 7 01:11:11.388618 update_engine[1464]: I20260307 01:11:11.220741 1464 update_check_scheduler.cc:74] Next update check in 5m36s Mar 7 01:11:11.240804 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 7 01:11:11.241260 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 7 01:11:11.271416 systemd[1]: motdgen.service: Deactivated successfully. Mar 7 01:11:11.286894 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 7 01:11:11.322592 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 7 01:11:11.398217 systemd-logind[1462]: Watching system buttons on /dev/input/event1 (Power Button) Mar 7 01:11:11.398261 systemd-logind[1462]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 7 01:11:11.456586 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 7 01:11:11.457319 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 7 01:11:11.460333 systemd-logind[1462]: New seat seat0. Mar 7 01:11:11.487482 extend-filesystems[1451]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 7 01:11:11.487482 extend-filesystems[1451]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 7 01:11:11.487482 extend-filesystems[1451]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 7 01:11:11.568279 extend-filesystems[1441]: Resized filesystem in /dev/vda9 Mar 7 01:11:11.489438 systemd[1]: Started systemd-logind.service - User Login Management. Mar 7 01:11:11.566168 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 7 01:11:11.566506 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 7 01:11:11.644206 (ntainerd)[1478]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 7 01:11:11.787396 jq[1477]: true Mar 7 01:11:11.861723 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 7 01:11:11.862172 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 7 01:11:12.271459 sshd_keygen[1470]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 7 01:11:12.864454 dbus-daemon[1439]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 7 01:11:12.924720 tar[1475]: linux-amd64/LICENSE Mar 7 01:11:12.924720 tar[1475]: linux-amd64/helm Mar 7 01:11:12.960737 systemd[1]: Started update-engine.service - Update Engine. Mar 7 01:11:13.030435 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 7 01:11:13.031688 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 7 01:11:13.034756 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 7 01:11:13.256064 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Mar 7 01:11:13.256395 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 7 01:11:13.341116 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 7 01:11:13.398032 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 7 01:11:13.718063 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 7 01:11:14.140864 bash[1519]: Updated "/home/core/.ssh/authorized_keys" Mar 7 01:11:14.149489 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 7 01:11:14.176336 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 7 01:11:14.314727 systemd[1]: issuegen.service: Deactivated successfully. Mar 7 01:11:14.315439 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 7 01:11:14.741694 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 7 01:11:14.787872 locksmithd[1509]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 7 01:11:14.834104 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 7 01:11:14.903664 systemd[1]: Started sshd@0-10.0.0.31:22-10.0.0.1:42766.service - OpenSSH per-connection server daemon (10.0.0.1:42766). Mar 7 01:11:15.260990 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 7 01:11:15.358817 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 7 01:11:15.427345 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 7 01:11:15.454805 systemd[1]: Reached target getty.target - Login Prompts. Mar 7 01:11:15.952753 sshd[1532]: Accepted publickey for core from 10.0.0.1 port 42766 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:11:15.994268 sshd[1532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:11:16.152459 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 7 01:11:16.412355 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 7 01:11:16.475183 systemd-logind[1462]: New session 1 of user core. Mar 7 01:11:17.960883 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 7 01:11:18.076442 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 7 01:11:19.154243 (systemd)[1544]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 7 01:11:23.860861 containerd[1478]: time="2026-03-07T01:11:23.858068711Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 7 01:11:24.849764 containerd[1478]: time="2026-03-07T01:11:24.848120321Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 7 01:11:24.861228 containerd[1478]: time="2026-03-07T01:11:24.855336101Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:11:24.861228 containerd[1478]: time="2026-03-07T01:11:24.855406012Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 7 01:11:24.861228 containerd[1478]: time="2026-03-07T01:11:24.855514625Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Mar 7 01:11:24.861228 containerd[1478]: time="2026-03-07T01:11:24.856007194Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 7 01:11:24.861228 containerd[1478]: time="2026-03-07T01:11:24.856040587Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 7 01:11:24.861228 containerd[1478]: time="2026-03-07T01:11:24.856169367Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:11:24.861228 containerd[1478]: time="2026-03-07T01:11:24.856194605Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 7 01:11:24.861228 containerd[1478]: time="2026-03-07T01:11:24.856508480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:11:24.861228 containerd[1478]: time="2026-03-07T01:11:24.856537985Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 7 01:11:24.861228 containerd[1478]: time="2026-03-07T01:11:24.856631019Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:11:24.861228 containerd[1478]: time="2026-03-07T01:11:24.856651728Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 7 01:11:24.859467 systemd[1544]: Queued start job for default target default.target. Mar 7 01:11:24.927198 containerd[1478]: time="2026-03-07T01:11:24.856815694Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 7 01:11:24.927198 containerd[1478]: time="2026-03-07T01:11:24.858810448Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 7 01:11:24.927198 containerd[1478]: time="2026-03-07T01:11:24.859665264Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 01:11:24.927198 containerd[1478]: time="2026-03-07T01:11:24.859692535Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 7 01:11:24.927198 containerd[1478]: time="2026-03-07T01:11:24.860173042Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 7 01:11:24.927198 containerd[1478]: time="2026-03-07T01:11:24.860449889Z" level=info msg="metadata content store policy set" policy=shared Mar 7 01:11:24.927198 containerd[1478]: time="2026-03-07T01:11:24.869634235Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 7 01:11:24.927198 containerd[1478]: time="2026-03-07T01:11:24.870011199Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Mar 7 01:11:24.927198 containerd[1478]: time="2026-03-07T01:11:24.870046325Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 7 01:11:24.927198 containerd[1478]: time="2026-03-07T01:11:24.870075800Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 7 01:11:24.927198 containerd[1478]: time="2026-03-07T01:11:24.870129090Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 7 01:11:24.927198 containerd[1478]: time="2026-03-07T01:11:24.870595761Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 7 01:11:24.927198 containerd[1478]: time="2026-03-07T01:11:24.871462088Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 7 01:11:24.927857 containerd[1478]: time="2026-03-07T01:11:24.871875459Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 7 01:11:24.927857 containerd[1478]: time="2026-03-07T01:11:24.871903542Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 7 01:11:24.927857 containerd[1478]: time="2026-03-07T01:11:24.872005563Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 7 01:11:24.927857 containerd[1478]: time="2026-03-07T01:11:24.872037573Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 7 01:11:24.927857 containerd[1478]: time="2026-03-07T01:11:24.872099699Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 7 01:11:24.927857 containerd[1478]: time="2026-03-07T01:11:24.872164790Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 7 01:11:24.927857 containerd[1478]: time="2026-03-07T01:11:24.872199374Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 7 01:11:24.927857 containerd[1478]: time="2026-03-07T01:11:24.872238508Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 7 01:11:24.927857 containerd[1478]: time="2026-03-07T01:11:24.872257383Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 7 01:11:24.927857 containerd[1478]: time="2026-03-07T01:11:24.872279344Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 7 01:11:24.927857 containerd[1478]: time="2026-03-07T01:11:24.872299441Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 7 01:11:24.927857 containerd[1478]: time="2026-03-07T01:11:24.872327183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 7 01:11:24.927857 containerd[1478]: time="2026-03-07T01:11:24.872346199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 7 01:11:24.927857 containerd[1478]: time="2026-03-07T01:11:24.872364553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Mar 7 01:11:24.928467 containerd[1478]: time="2026-03-07T01:11:24.872381805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 7 01:11:24.928467 containerd[1478]: time="2026-03-07T01:11:24.872401913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 7 01:11:24.928467 containerd[1478]: time="2026-03-07T01:11:24.872422181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 7 01:11:24.928467 containerd[1478]: time="2026-03-07T01:11:24.872440114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 7 01:11:24.928467 containerd[1478]: time="2026-03-07T01:11:24.872460122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 7 01:11:24.928467 containerd[1478]: time="2026-03-07T01:11:24.872478585Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 7 01:11:24.928467 containerd[1478]: time="2026-03-07T01:11:24.872497842Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 7 01:11:24.928467 containerd[1478]: time="2026-03-07T01:11:24.872515996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 7 01:11:24.928467 containerd[1478]: time="2026-03-07T01:11:24.872537516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 7 01:11:24.928467 containerd[1478]: time="2026-03-07T01:11:24.874701896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 7 01:11:24.928467 containerd[1478]: time="2026-03-07T01:11:24.874734758Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 7 01:11:24.928467 containerd[1478]: time="2026-03-07T01:11:24.874909895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 7 01:11:24.928467 containerd[1478]: time="2026-03-07T01:11:24.875141798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 7 01:11:24.928467 containerd[1478]: time="2026-03-07T01:11:24.875163117Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 7 01:11:24.929158 containerd[1478]: time="2026-03-07T01:11:24.913843291Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 7 01:11:24.929158 containerd[1478]: time="2026-03-07T01:11:24.915291254Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 7 01:11:24.929158 containerd[1478]: time="2026-03-07T01:11:24.915360063Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 7 01:11:24.929158 containerd[1478]: time="2026-03-07T01:11:24.915390149Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 7 01:11:24.929158 containerd[1478]: time="2026-03-07T01:11:24.915411279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Mar 7 01:11:24.929158 containerd[1478]: time="2026-03-07T01:11:24.915466281Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 7 01:11:24.929158 containerd[1478]: time="2026-03-07T01:11:24.915648050Z" level=info msg="NRI interface is disabled by configuration." Mar 7 01:11:24.929158 containerd[1478]: time="2026-03-07T01:11:24.916243332Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 7 01:11:24.929495 containerd[1478]: time="2026-03-07T01:11:24.922029064Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 7 01:11:24.929495 containerd[1478]: time="2026-03-07T01:11:24.922211364Z" level=info msg="Connect containerd service" Mar 7 01:11:24.929495 containerd[1478]: time="2026-03-07T01:11:24.922456371Z" level=info msg="using legacy CRI server" Mar 7 01:11:24.929495 containerd[1478]: time="2026-03-07T01:11:24.922471700Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 7 01:11:24.929495 containerd[1478]: 
time="2026-03-07T01:11:24.923317418Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 7 01:11:24.935510 systemd[1544]: Created slice app.slice - User Application Slice. Mar 7 01:11:24.935600 systemd[1544]: Reached target paths.target - Paths. Mar 7 01:11:24.935628 systemd[1544]: Reached target timers.target - Timers. Mar 7 01:11:24.938012 containerd[1478]: time="2026-03-07T01:11:24.936645432Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 7 01:11:24.938640 containerd[1478]: time="2026-03-07T01:11:24.938539597Z" level=info msg="Start subscribing containerd event" Mar 7 01:11:24.939695 containerd[1478]: time="2026-03-07T01:11:24.939146330Z" level=info msg="Start recovering state" Mar 7 01:11:24.939695 containerd[1478]: time="2026-03-07T01:11:24.939501253Z" level=info msg="Start event monitor" Mar 7 01:11:24.940130 containerd[1478]: time="2026-03-07T01:11:24.940034809Z" level=info msg="Start snapshots syncer" Mar 7 01:11:25.040443 containerd[1478]: time="2026-03-07T01:11:24.990139702Z" level=info msg="Start cni network conf syncer for default" Mar 7 01:11:25.116439 containerd[1478]: time="2026-03-07T01:11:25.103117494Z" level=info msg="Start streaming server" Mar 7 01:11:25.116439 containerd[1478]: time="2026-03-07T01:11:25.109325857Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 7 01:11:25.116439 containerd[1478]: time="2026-03-07T01:11:25.109844385Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 7 01:11:25.170138 systemd[1544]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 7 01:11:25.247284 systemd[1]: Started containerd.service - containerd container runtime. Mar 7 01:11:25.274724 containerd[1478]: time="2026-03-07T01:11:25.253843753Z" level=info msg="containerd successfully booted in 1.784997s" Mar 7 01:11:25.802026 systemd[1544]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 7 01:11:25.802291 systemd[1544]: Reached target sockets.target - Sockets. Mar 7 01:11:25.802315 systemd[1544]: Reached target basic.target - Basic System. Mar 7 01:11:25.802393 systemd[1544]: Reached target default.target - Main User Target. Mar 7 01:11:25.802459 systemd[1544]: Startup finished in 5.973s. Mar 7 01:11:25.811911 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 7 01:11:26.320225 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 7 01:11:26.703202 systemd[1]: Started sshd@1-10.0.0.31:22-10.0.0.1:43100.service - OpenSSH per-connection server daemon (10.0.0.1:43100). Mar 7 01:11:29.251397 tar[1475]: linux-amd64/README.md Mar 7 01:11:29.256713 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 43100 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:11:29.360519 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:11:29.768656 systemd-logind[1462]: New session 2 of user core. Mar 7 01:11:29.823612 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 7 01:11:29.828016 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 7 01:11:30.283554 sshd[1564]: pam_unix(sshd:session): session closed for user core Mar 7 01:11:30.394501 systemd[1]: sshd@1-10.0.0.31:22-10.0.0.1:43100.service: Deactivated successfully. 
Mar 7 01:11:30.417199 systemd[1]: session-2.scope: Deactivated successfully. Mar 7 01:11:30.428321 systemd-logind[1462]: Session 2 logged out. Waiting for processes to exit. Mar 7 01:11:30.507358 systemd[1]: Started sshd@2-10.0.0.31:22-10.0.0.1:52676.service - OpenSSH per-connection server daemon (10.0.0.1:52676). Mar 7 01:11:30.541341 systemd-logind[1462]: Removed session 2. Mar 7 01:11:30.693885 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 52676 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:11:30.712202 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:11:30.735253 systemd-logind[1462]: New session 3 of user core. Mar 7 01:11:30.807415 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 7 01:11:30.968279 sshd[1574]: pam_unix(sshd:session): session closed for user core Mar 7 01:11:30.982497 systemd[1]: sshd@2-10.0.0.31:22-10.0.0.1:52676.service: Deactivated successfully. Mar 7 01:11:30.998376 systemd[1]: session-3.scope: Deactivated successfully. Mar 7 01:11:31.006241 systemd-logind[1462]: Session 3 logged out. Waiting for processes to exit. Mar 7 01:11:31.014203 systemd-logind[1462]: Removed session 3. Mar 7 01:11:33.229553 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:11:33.254533 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 7 01:11:33.256469 systemd[1]: Startup finished in 14.391s (kernel) + 59.164s (initrd) + 59.139s (userspace) = 2min 12.695s. Mar 7 01:11:33.266104 (kubelet)[1585]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:11:41.086777 systemd[1]: Started sshd@3-10.0.0.31:22-10.0.0.1:58100.service - OpenSSH per-connection server daemon (10.0.0.1:58100). Mar 7 01:11:41.627284 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 58100 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:11:41.654341 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:11:41.733900 systemd-logind[1462]: New session 4 of user core. Mar 7 01:11:41.759772 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 7 01:11:42.608836 sshd[1592]: pam_unix(sshd:session): session closed for user core Mar 7 01:11:42.648896 systemd[1]: sshd@3-10.0.0.31:22-10.0.0.1:58100.service: Deactivated successfully. Mar 7 01:11:42.667464 systemd[1]: session-4.scope: Deactivated successfully. Mar 7 01:11:42.676887 systemd-logind[1462]: Session 4 logged out. Waiting for processes to exit. Mar 7 01:11:42.710684 systemd[1]: Started sshd@4-10.0.0.31:22-10.0.0.1:58110.service - OpenSSH per-connection server daemon (10.0.0.1:58110). Mar 7 01:11:42.759753 systemd-logind[1462]: Removed session 4. Mar 7 01:11:43.609786 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 58110 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:11:43.614326 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:11:43.681673 systemd-logind[1462]: New session 5 of user core. Mar 7 01:11:43.721080 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 7 01:11:43.868886 sshd[1600]: pam_unix(sshd:session): session closed for user core Mar 7 01:11:44.012568 systemd[1]: sshd@4-10.0.0.31:22-10.0.0.1:58110.service: Deactivated successfully. Mar 7 01:11:44.039459 systemd[1]: session-5.scope: Deactivated successfully. 
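The startup summary above adds up, modulo systemd rounding each component for display:

$$ 14.391 + 59.164 + 59.139 = 132.694\ \text{s} \approx 2\ \text{min}\ 12.695\ \text{s} $$

the 1 ms gap against the printed total comes from the per-stage figures being rounded independently.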
Mar 7 01:11:44.061451 systemd-logind[1462]: Session 5 logged out. Waiting for processes to exit. Mar 7 01:11:44.184028 systemd[1]: Started sshd@5-10.0.0.31:22-10.0.0.1:58114.service - OpenSSH per-connection server daemon (10.0.0.1:58114). Mar 7 01:11:44.198536 systemd-logind[1462]: Removed session 5. Mar 7 01:11:44.527430 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 58114 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:11:44.558729 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:11:44.598702 systemd-logind[1462]: New session 6 of user core. Mar 7 01:11:44.642436 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 7 01:11:44.756912 kubelet[1585]: E0307 01:11:44.754261 1585 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:11:44.896090 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:11:44.897098 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:11:44.900155 systemd[1]: kubelet.service: Consumed 13.247s CPU time. Mar 7 01:11:45.343553 sshd[1607]: pam_unix(sshd:session): session closed for user core Mar 7 01:11:45.372623 systemd[1]: Started sshd@6-10.0.0.31:22-10.0.0.1:58120.service - OpenSSH per-connection server daemon (10.0.0.1:58120). Mar 7 01:11:45.396149 systemd[1]: sshd@5-10.0.0.31:22-10.0.0.1:58114.service: Deactivated successfully. Mar 7 01:11:45.407278 systemd[1]: session-6.scope: Deactivated successfully. Mar 7 01:11:45.426104 systemd-logind[1462]: Session 6 logged out. Waiting for processes to exit. Mar 7 01:11:45.438127 systemd-logind[1462]: Removed session 6. Mar 7 01:11:45.544267 sshd[1613]: Accepted publickey for core from 10.0.0.1 port 58120 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:11:45.552647 sshd[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:11:45.590739 systemd-logind[1462]: New session 7 of user core. Mar 7 01:11:45.606668 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 7 01:11:45.869897 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 7 01:11:45.880893 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:11:45.947573 sudo[1618]: pam_unix(sudo:session): session closed for user root Mar 7 01:11:45.965007 sshd[1613]: pam_unix(sshd:session): session closed for user core Mar 7 01:11:46.014249 systemd[1]: sshd@6-10.0.0.31:22-10.0.0.1:58120.service: Deactivated successfully. Mar 7 01:11:46.031032 systemd[1]: session-7.scope: Deactivated successfully. Mar 7 01:11:46.049062 systemd-logind[1462]: Session 7 logged out. Waiting for processes to exit. Mar 7 01:11:46.130235 systemd[1]: Started sshd@7-10.0.0.31:22-10.0.0.1:58132.service - OpenSSH per-connection server daemon (10.0.0.1:58132). Mar 7 01:11:46.138881 systemd-logind[1462]: Removed session 7. Mar 7 01:11:46.426725 sshd[1623]: Accepted publickey for core from 10.0.0.1 port 58132 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:11:46.423348 sshd[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:11:46.525375 systemd-logind[1462]: New session 8 of user core. 
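The kubelet exits above because /var/lib/kubelet/config.yaml does not exist yet; on a node like this it is normally written by `kubeadm init`/`kubeadm join`, so the crash loop that follows is expected until that runs. A minimal sketch of materializing such a file by hand — the YAML content here is an assumption, not what kubeadm would generate; only apiVersion/kind are required, and cgroupDriver mirrors the SystemdCgroup:true runc option visible in the containerd config dump above:

```go
package main

import (
	"log"
	"os"
)

// Hypothetical stand-in for the file kubeadm would normally write.
// cgroupDriver: systemd matches containerd's SystemdCgroup:true setting
// seen earlier in this log; everything else is left at kubelet defaults.
const kubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
`

func main() {
	if err := os.MkdirAll("/var/lib/kubelet", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/var/lib/kubelet/config.yaml", []byte(kubeletConfig), 0o644); err != nil {
		log.Fatal(err)
	}
}
```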
Mar 7 01:11:46.604477 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 7 01:11:46.794501 sudo[1627]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 7 01:11:46.802141 sudo[1627]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:11:46.864517 sudo[1627]: pam_unix(sudo:session): session closed for user root Mar 7 01:11:46.897299 sudo[1626]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 7 01:11:46.901160 sudo[1626]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:11:47.569662 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 7 01:11:47.640417 auditctl[1630]: No rules Mar 7 01:11:47.646140 systemd[1]: audit-rules.service: Deactivated successfully. Mar 7 01:11:47.648047 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 7 01:11:47.681137 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 7 01:11:47.855456 augenrules[1648]: No rules Mar 7 01:11:47.864837 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 7 01:11:47.868444 sudo[1626]: pam_unix(sudo:session): session closed for user root Mar 7 01:11:47.884068 sshd[1623]: pam_unix(sshd:session): session closed for user core Mar 7 01:11:48.022280 systemd[1]: sshd@7-10.0.0.31:22-10.0.0.1:58132.service: Deactivated successfully. Mar 7 01:11:48.037625 systemd[1]: session-8.scope: Deactivated successfully. Mar 7 01:11:48.045761 systemd-logind[1462]: Session 8 logged out. Waiting for processes to exit. Mar 7 01:11:48.073234 systemd[1]: Started sshd@8-10.0.0.31:22-10.0.0.1:58148.service - OpenSSH per-connection server daemon (10.0.0.1:58148). Mar 7 01:11:48.081047 systemd-logind[1462]: Removed session 8. Mar 7 01:11:48.162414 sshd[1656]: Accepted publickey for core from 10.0.0.1 port 58148 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:11:48.186461 sshd[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:11:48.213169 systemd-logind[1462]: New session 9 of user core. Mar 7 01:11:48.229115 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 7 01:11:48.331539 sudo[1660]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 7 01:11:48.332402 sudo[1660]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 01:11:54.161218 (dockerd)[1677]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 7 01:11:54.161558 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 7 01:11:55.631549 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 7 01:11:55.737639 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:11:56.591208 update_engine[1464]: I20260307 01:11:56.576247 1464 update_attempter.cc:509] Updating boot flags... Mar 7 01:11:57.138062 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1691) Mar 7 01:11:58.154231 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1691) Mar 7 01:12:08.715269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 7 01:12:08.732578 (kubelet)[1713]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:12:08.750249 dockerd[1677]: time="2026-03-07T01:12:08.742461725Z" level=info msg="Starting up" Mar 7 01:12:11.200661 kubelet[1713]: E0307 01:12:11.199085 1713 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:12:11.222604 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:12:11.223582 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:12:11.224166 systemd[1]: kubelet.service: Consumed 4.551s CPU time. Mar 7 01:12:11.633500 dockerd[1677]: time="2026-03-07T01:12:11.632160566Z" level=info msg="Loading containers: start." Mar 7 01:12:15.463441 kernel: Initializing XFRM netlink socket Mar 7 01:12:17.059857 systemd-networkd[1397]: docker0: Link UP Mar 7 01:12:17.209276 dockerd[1677]: time="2026-03-07T01:12:17.206752395Z" level=info msg="Loading containers: done." Mar 7 01:12:17.409110 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1621960763-merged.mount: Deactivated successfully. Mar 7 01:12:17.427020 dockerd[1677]: time="2026-03-07T01:12:17.425009049Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 7 01:12:17.427020 dockerd[1677]: time="2026-03-07T01:12:17.425569357Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 7 01:12:17.427020 dockerd[1677]: time="2026-03-07T01:12:17.426011503Z" level=info msg="Daemon has completed initialization" Mar 7 01:12:17.945191 dockerd[1677]: time="2026-03-07T01:12:17.937818763Z" level=info msg="API listen on /run/docker.sock" Mar 7 01:12:17.991857 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 7 01:12:21.326687 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 7 01:12:21.398624 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:12:25.637347 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:12:25.850663 (kubelet)[1863]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:12:28.268432 containerd[1478]: time="2026-03-07T01:12:28.267623966Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\"" Mar 7 01:12:29.306018 kubelet[1863]: E0307 01:12:29.303278 1863 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:12:29.402199 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:12:29.406489 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:12:29.408472 systemd[1]: kubelet.service: Consumed 3.950s CPU time. 
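With dockerd now answering on /run/docker.sock ("API listen on /run/docker.sock" above), a minimal sketch of pinging it from Go with the official client (github.com/docker/docker/client); DOCKER_HOST is presumably unset on this host, so FromEnv falls back to the default unix socket the log shows:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	// FromEnv falls back to unix:///var/run/docker.sock when DOCKER_HOST is unset;
	// API version negotiation avoids hardcoding the daemon version (26.1.0 above).
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ping, err := cli.Ping(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("docker API version:", ping.APIVersion)
}
```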
Mar 7 01:12:32.354178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount508683628.mount: Deactivated successfully. Mar 7 01:12:39.517630 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 7 01:12:39.563464 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:12:42.342240 (kubelet)[1943]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:12:42.343902 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:12:44.319074 kubelet[1943]: E0307 01:12:44.317366 1943 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:12:44.336643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:12:44.337039 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:12:44.339434 systemd[1]: kubelet.service: Consumed 2.037s CPU time. Mar 7 01:12:52.535567 containerd[1478]: time="2026-03-07T01:12:52.531174403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:12:52.640561 containerd[1478]: time="2026-03-07T01:12:52.626655586Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074497" Mar 7 01:12:52.752113 containerd[1478]: time="2026-03-07T01:12:52.751241238Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:12:52.774470 containerd[1478]: time="2026-03-07T01:12:52.772429647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:12:52.787820 containerd[1478]: time="2026-03-07T01:12:52.784636575Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 24.51657963s" Mar 7 01:12:52.787820 containerd[1478]: time="2026-03-07T01:12:52.784765419Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\"" Mar 7 01:12:52.813493 containerd[1478]: time="2026-03-07T01:12:52.813021250Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\"" Mar 7 01:12:54.500806 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 7 01:12:54.587871 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:12:56.958074 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
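The PullImage round-trip above (about 24.5 s for kube-apiserver:v1.34.5) goes through the CRI plugin, but the same pull can be reproduced against containerd directly; a sketch with the Go client, reusing the k8s.io namespace assumption from the earlier example:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// WithPullUnpack also unpacks layers into the snapshotter, matching the
	// overlayfs snapshotter selected in the CRI config dump above.
	img, err := client.Pull(ctx, "registry.k8s.io/kube-apiserver:v1.34.5", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name())
}
```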
Mar 7 01:12:56.960733 (kubelet)[1963]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:12:57.682198 kubelet[1963]: E0307 01:12:57.681601 1963 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:12:57.712898 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:12:57.715002 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:12:57.715996 systemd[1]: kubelet.service: Consumed 1.160s CPU time. Mar 7 01:13:07.841297 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Mar 7 01:13:07.911366 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:13:08.251366 containerd[1478]: time="2026-03-07T01:13:08.251118397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:13:08.263488 containerd[1478]: time="2026-03-07T01:13:08.260317639Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165823" Mar 7 01:13:08.289136 containerd[1478]: time="2026-03-07T01:13:08.283248886Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:13:09.038100 containerd[1478]: time="2026-03-07T01:13:09.024423651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:13:09.047345 containerd[1478]: time="2026-03-07T01:13:09.041503755Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 16.228426089s" Mar 7 01:13:09.047345 containerd[1478]: time="2026-03-07T01:13:09.041563759Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\"" Mar 7 01:13:09.060176 containerd[1478]: time="2026-03-07T01:13:09.059401524Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\"" Mar 7 01:13:11.305740 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 7 01:13:11.354902 (kubelet)[1978]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:13:13.516614 kubelet[1978]: E0307 01:13:13.515609 1978 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:13:13.807771 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:13:13.826977 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:13:13.856439 systemd[1]: kubelet.service: Consumed 2.769s CPU time. Mar 7 01:13:19.015473 containerd[1478]: time="2026-03-07T01:13:19.014877706Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:13:19.025155 containerd[1478]: time="2026-03-07T01:13:19.021128847Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729824" Mar 7 01:13:19.029813 containerd[1478]: time="2026-03-07T01:13:19.027206877Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:13:19.040297 containerd[1478]: time="2026-03-07T01:13:19.040147675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:13:19.063608 containerd[1478]: time="2026-03-07T01:13:19.062242005Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 10.002512847s" Mar 7 01:13:19.063608 containerd[1478]: time="2026-03-07T01:13:19.063302832Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\"" Mar 7 01:13:19.085682 containerd[1478]: time="2026-03-07T01:13:19.083701767Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\"" Mar 7 01:13:24.007271 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Mar 7 01:13:24.075373 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:13:25.795813 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 7 01:13:26.104776 (kubelet)[2004]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:13:27.825773 kubelet[2004]: E0307 01:13:27.823704 2004 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:13:27.842438 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:13:27.844242 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:13:27.865053 systemd[1]: kubelet.service: Consumed 1.316s CPU time. Mar 7 01:13:28.718478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2799657537.mount: Deactivated successfully. Mar 7 01:13:38.033322 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Mar 7 01:13:38.103140 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:13:41.102418 containerd[1478]: time="2026-03-07T01:13:41.100973032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:13:41.102418 containerd[1478]: time="2026-03-07T01:13:41.101731596Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861770" Mar 7 01:13:41.121764 containerd[1478]: time="2026-03-07T01:13:41.118653521Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:13:41.190301 containerd[1478]: time="2026-03-07T01:13:41.185053469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:13:41.195527 containerd[1478]: time="2026-03-07T01:13:41.195463586Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 22.111700141s" Mar 7 01:13:41.197040 containerd[1478]: time="2026-03-07T01:13:41.196995951Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\"" Mar 7 01:13:41.557095 containerd[1478]: time="2026-03-07T01:13:41.554527036Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Mar 7 01:13:42.602872 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 7 01:13:42.667325 (kubelet)[2025]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:13:45.541056 kubelet[2025]: E0307 01:13:45.536720 2025 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:13:45.550497 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:13:45.610264 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:13:45.625708 systemd[1]: kubelet.service: Consumed 3.459s CPU time. Mar 7 01:13:45.880480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2367767430.mount: Deactivated successfully. Mar 7 01:13:57.125005 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Mar 7 01:13:57.919663 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:14:13.535545 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:14:13.561259 (kubelet)[2064]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:14:14.670688 kubelet[2064]: E0307 01:14:14.664682 2064 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:14:14.681658 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:14:14.684163 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:14:14.685116 systemd[1]: kubelet.service: Consumed 4.618s CPU time. Mar 7 01:14:25.313020 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Mar 7 01:14:25.363466 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:14:29.832767 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:14:29.953667 (kubelet)[2109]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:14:31.497165 kubelet[2109]: E0307 01:14:31.495822 2109 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:14:31.549081 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:14:31.551434 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:14:31.556070 systemd[1]: kubelet.service: Consumed 2.034s CPU time. 
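The restart counter climbing through these entries is systemd's Restart= logic re-launching kubelet.service after each config-load failure. The current count can be read back at any time; a sketch that shells out to systemctl from Go (`NRestarts` is a standard systemd service property):

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// `systemctl show -p NRestarts kubelet.service` prints "NRestarts=<n>".
	out, err := exec.Command("systemctl", "show", "-p", "NRestarts", "kubelet.service").Output()
	if err != nil {
		log.Fatal(err)
	}
	count := strings.TrimPrefix(strings.TrimSpace(string(out)), "NRestarts=")
	fmt.Println("kubelet restarts so far:", count) // e.g. 9 at this point in the log
}
```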
Mar 7 01:14:33.271884 containerd[1478]: time="2026-03-07T01:14:33.260789386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:14:33.293471 containerd[1478]: time="2026-03-07T01:14:33.282735432Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Mar 7 01:14:33.299296 containerd[1478]: time="2026-03-07T01:14:33.298666557Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:14:33.322506 containerd[1478]: time="2026-03-07T01:14:33.322346109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:14:33.324073 containerd[1478]: time="2026-03-07T01:14:33.323994856Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 51.768451228s" Mar 7 01:14:33.327371 containerd[1478]: time="2026-03-07T01:14:33.327278299Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Mar 7 01:14:33.357009 containerd[1478]: time="2026-03-07T01:14:33.356394769Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 7 01:14:34.966750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2754980297.mount: Deactivated successfully. 
Mar 7 01:14:35.101017 containerd[1478]: time="2026-03-07T01:14:35.100399753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:14:35.131180 containerd[1478]: time="2026-03-07T01:14:35.129891835Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Mar 7 01:14:35.141345 containerd[1478]: time="2026-03-07T01:14:35.141238174Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:14:35.199345 containerd[1478]: time="2026-03-07T01:14:35.198681993Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 1.842219922s" Mar 7 01:14:35.199345 containerd[1478]: time="2026-03-07T01:14:35.199020132Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 7 01:14:35.202726 containerd[1478]: time="2026-03-07T01:14:35.201409442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:14:35.220031 containerd[1478]: time="2026-03-07T01:14:35.219833937Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Mar 7 01:14:36.623288 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount299250129.mount: Deactivated successfully. Mar 7 01:14:41.982249 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Mar 7 01:14:42.024529 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:14:44.891711 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:14:44.941811 (kubelet)[2174]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:14:48.000406 kubelet[2174]: E0307 01:14:47.999796 2174 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:14:48.338703 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:14:48.352490 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:14:48.353489 systemd[1]: kubelet.service: Consumed 2.850s CPU time. 
Mar 7 01:14:58.237878 containerd[1478]: time="2026-03-07T01:14:58.236401490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:14:58.249740 containerd[1478]: time="2026-03-07T01:14:58.247133009Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860674" Mar 7 01:14:58.255401 containerd[1478]: time="2026-03-07T01:14:58.255067729Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:14:58.291445 containerd[1478]: time="2026-03-07T01:14:58.290712635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:14:58.300320 containerd[1478]: time="2026-03-07T01:14:58.298422461Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 23.07850029s" Mar 7 01:14:58.300320 containerd[1478]: time="2026-03-07T01:14:58.298521110Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\"" Mar 7 01:14:58.514274 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Mar 7 01:14:58.747655 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:15:01.605998 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:15:01.659544 (kubelet)[2223]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:15:03.048700 kubelet[2223]: E0307 01:15:03.047798 2223 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:15:03.073249 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:15:03.078826 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:15:03.367294 systemd[1]: kubelet.service: Consumed 1.208s CPU time. Mar 7 01:15:13.248761 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Mar 7 01:15:13.283902 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:15:14.898778 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 7 01:15:14.918795 (kubelet)[2249]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 01:15:15.428368 kubelet[2249]: E0307 01:15:15.428047 2249 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 01:15:15.443783 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 01:15:15.444164 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 01:15:18.840140 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:15:18.892266 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:15:19.189599 systemd[1]: Reloading requested from client PID 2265 ('systemctl') (unit session-9.scope)... Mar 7 01:15:19.189682 systemd[1]: Reloading... Mar 7 01:15:20.005554 zram_generator::config[2307]: No configuration found. Mar 7 01:15:21.062774 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 01:15:21.542744 systemd[1]: Reloading finished in 2347 ms. Mar 7 01:15:21.984808 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 7 01:15:21.997146 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 7 01:15:21.999615 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:15:22.041717 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:15:26.060534 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:15:26.216491 (kubelet)[2350]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 01:15:26.862554 kubelet[2350]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 7 01:15:26.862554 kubelet[2350]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 01:15:26.884524 kubelet[2350]: I0307 01:15:26.866374 2350 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 7 01:15:31.503401 kubelet[2350]: I0307 01:15:31.501364 2350 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 7 01:15:31.503401 kubelet[2350]: I0307 01:15:31.501430 2350 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 01:15:31.503401 kubelet[2350]: I0307 01:15:31.501535 2350 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 7 01:15:31.503401 kubelet[2350]: I0307 01:15:31.501552 2350 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
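The docker.socket path rewrite during the reload above (/var/run/docker.sock → /run/docker.sock) happens because /var/run is only a compatibility symlink into /run on systemd systems, so both paths name the same socket; a quick way to confirm that from Go:

```go
package main

import (
	"fmt"
	"log"
	"os"
)

func main() {
	// On systemd-based systems /var/run is a symlink to /run, which is why
	// systemd rewrote docker.socket's ListenStream path during the reload above.
	target, err := os.Readlink("/var/run")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("/var/run ->", target) // typically "../run" or "/run"
}
```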
Mar 7 01:15:31.519675 kubelet[2350]: I0307 01:15:31.505809 2350 server.go:956] "Client rotation is on, will bootstrap in background" Mar 7 01:15:31.545539 kubelet[2350]: E0307 01:15:31.540762 2350 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.31:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 01:15:31.559079 kubelet[2350]: I0307 01:15:31.555914 2350 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 01:15:31.647203 kubelet[2350]: E0307 01:15:31.639835 2350 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 01:15:31.647203 kubelet[2350]: I0307 01:15:31.640134 2350 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 7 01:15:31.712867 kubelet[2350]: I0307 01:15:31.712729 2350 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 7 01:15:31.717992 kubelet[2350]: I0307 01:15:31.717431 2350 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 01:15:31.723834 kubelet[2350]: I0307 01:15:31.717489 2350 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 7 01:15:31.723834 kubelet[2350]: I0307 01:15:31.722709 2350 topology_manager.go:138] "Creating topology manager with none policy" Mar 7 01:15:31.723834 kubelet[2350]: I0307 01:15:31.722730 2350 container_manager_linux.go:306] "Creating device plugin manager" Mar 7 01:15:31.723834 kubelet[2350]: I0307 01:15:31.723171 2350 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 7 
01:15:31.745873 kubelet[2350]: I0307 01:15:31.745685 2350 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:15:31.746657 kubelet[2350]: I0307 01:15:31.746449 2350 kubelet.go:475] "Attempting to sync node with API server" Mar 7 01:15:31.746657 kubelet[2350]: I0307 01:15:31.746520 2350 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 01:15:31.746787 kubelet[2350]: I0307 01:15:31.746662 2350 kubelet.go:387] "Adding apiserver pod source" Mar 7 01:15:31.746787 kubelet[2350]: I0307 01:15:31.746780 2350 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 01:15:31.752222 kubelet[2350]: E0307 01:15:31.751082 2350 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:15:31.752222 kubelet[2350]: E0307 01:15:31.751268 2350 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:15:31.794058 kubelet[2350]: I0307 01:15:31.763753 2350 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 01:15:31.796500 kubelet[2350]: I0307 01:15:31.796302 2350 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 01:15:31.796500 kubelet[2350]: I0307 01:15:31.796461 2350 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 7 01:15:31.796715 kubelet[2350]: W0307 01:15:31.796681 2350 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
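
[Annotation: every "Failed to watch ... connection refused" above targets https://10.0.0.31:6443, the apiserver this kubelet is itself about to launch as a static pod from /etc/kubernetes/manifests, so the errors clear once that pod is up. A small probe, assuming it runs on the node itself, that distinguishes the states this log cycles through; `probe` is a hypothetical helper, not a kubelet facility:]

```python
# Minimal probe for the apiserver endpoint the reflectors above keep
# retrying: "connection refused" while nothing listens on 6443, no answer
# while the host or process is still coming up, and a successful connect
# once it serves. (The later TLS-handshake-timeout phase of this log sits
# between the last two states.)
import socket

def probe(host: str = "10.0.0.31", port: int = 6443, timeout: float = 2.0) -> str:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "listening: something is bound to %s:%d" % (host, port)
    except ConnectionRefusedError:
        return "connection refused: no apiserver on the port yet"
    except OSError as exc:  # timeouts, unreachable host, etc.
        return "no answer: %s" % exc

if __name__ == "__main__":
    print(probe())
```
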
Mar 7 01:15:32.041754 kubelet[2350]: I0307 01:15:32.041203 2350 server.go:1262] "Started kubelet" Mar 7 01:15:32.108547 kubelet[2350]: I0307 01:15:32.054581 2350 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 01:15:32.108547 kubelet[2350]: I0307 01:15:32.056496 2350 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 01:15:32.108547 kubelet[2350]: I0307 01:15:32.056696 2350 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 7 01:15:32.108547 kubelet[2350]: I0307 01:15:32.064226 2350 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 01:15:32.108547 kubelet[2350]: I0307 01:15:32.066724 2350 server.go:310] "Adding debug handlers to kubelet server" Mar 7 01:15:32.207499 kubelet[2350]: I0307 01:15:32.207069 2350 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 7 01:15:32.228010 kubelet[2350]: I0307 01:15:32.216528 2350 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 01:15:32.228010 kubelet[2350]: I0307 01:15:32.227544 2350 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 7 01:15:32.228266 kubelet[2350]: I0307 01:15:32.227884 2350 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 7 01:15:32.228667 kubelet[2350]: I0307 01:15:32.228649 2350 reconciler.go:29] "Reconciler: start to sync state" Mar 7 01:15:32.239454 kubelet[2350]: E0307 01:15:32.234337 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:15:32.289500 kubelet[2350]: E0307 01:15:32.273408 2350 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.31:6443: connect: connection refused" interval="200ms" Mar 7 01:15:32.290809 kubelet[2350]: E0307 01:15:32.290621 2350 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 01:15:32.363162 kubelet[2350]: E0307 01:15:32.290871 2350 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.31:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.31:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189a6a2a054944c4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-07 01:15:32.033569988 +0000 UTC m=+5.765447138,LastTimestamp:2026-03-07 01:15:32.033569988 +0000 UTC m=+5.765447138,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 7 01:15:32.443566 kubelet[2350]: E0307 01:15:32.363174 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:15:32.499039 kubelet[2350]: E0307 01:15:32.498857 2350 
kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:15:32.524047 kubelet[2350]: E0307 01:15:32.508416 2350 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.31:6443: connect: connection refused" interval="400ms" Mar 7 01:15:32.524047 kubelet[2350]: I0307 01:15:32.510382 2350 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 01:15:32.548844 kubelet[2350]: E0307 01:15:32.526813 2350 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 01:15:32.630588 kubelet[2350]: E0307 01:15:32.608405 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:15:32.647248 kubelet[2350]: I0307 01:15:32.647208 2350 factory.go:223] Registration of the containerd container factory successfully Mar 7 01:15:32.647543 kubelet[2350]: I0307 01:15:32.647524 2350 factory.go:223] Registration of the systemd container factory successfully Mar 7 01:15:32.722173 kubelet[2350]: E0307 01:15:32.718159 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:15:32.818519 kubelet[2350]: E0307 01:15:32.818467 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:15:33.054845 kubelet[2350]: E0307 01:15:32.990754 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:15:33.121073 kubelet[2350]: E0307 01:15:33.120761 2350 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.31:6443: connect: connection refused" interval="800ms" Mar 7 01:15:33.200714 kubelet[2350]: E0307 01:15:33.200510 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:15:33.264195 kubelet[2350]: I0307 01:15:33.257435 2350 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Mar 7 01:15:33.264195 kubelet[2350]: E0307 01:15:33.264346 2350 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:15:33.297526 kubelet[2350]: E0307 01:15:33.297215 2350 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 01:15:33.309520 kubelet[2350]: E0307 01:15:33.309263 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:15:33.332279 kubelet[2350]: I0307 01:15:33.332122 2350 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 7 01:15:33.332279 kubelet[2350]: I0307 01:15:33.332195 2350 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 7 01:15:33.332499 kubelet[2350]: I0307 01:15:33.332455 2350 kubelet.go:2428] "Starting kubelet main sync loop" Mar 7 01:15:33.332766 kubelet[2350]: E0307 01:15:33.332655 2350 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 01:15:33.347492 kubelet[2350]: E0307 01:15:33.341471 2350 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:15:33.347492 kubelet[2350]: E0307 01:15:33.341717 2350 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 01:15:33.365706 kubelet[2350]: I0307 01:15:33.365637 2350 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 7 01:15:33.366031 kubelet[2350]: I0307 01:15:33.365898 2350 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 7 01:15:33.366236 kubelet[2350]: I0307 01:15:33.366215 2350 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:15:33.401385 kubelet[2350]: I0307 01:15:33.400297 2350 policy_none.go:49] "None policy: Start" Mar 7 01:15:33.401385 kubelet[2350]: I0307 01:15:33.400441 2350 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 7 01:15:33.401385 kubelet[2350]: I0307 01:15:33.400505 2350 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 7 01:15:33.411723 kubelet[2350]: I0307 01:15:33.409381 2350 policy_none.go:47] "Start" Mar 7 01:15:33.414793 kubelet[2350]: E0307 01:15:33.413174 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:15:33.434097 kubelet[2350]: E0307 01:15:33.433764 2350 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed 
yet" Mar 7 01:15:33.457732 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 7 01:15:33.515814 kubelet[2350]: E0307 01:15:33.515153 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:15:33.534415 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 7 01:15:33.550480 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 7 01:15:33.578770 kubelet[2350]: E0307 01:15:33.577828 2350 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 01:15:33.606140 kubelet[2350]: I0307 01:15:33.597559 2350 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 7 01:15:33.606140 kubelet[2350]: I0307 01:15:33.597693 2350 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 01:15:33.606140 kubelet[2350]: I0307 01:15:33.605159 2350 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 7 01:15:33.611375 kubelet[2350]: E0307 01:15:33.610750 2350 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 7 01:15:33.619508 kubelet[2350]: E0307 01:15:33.616867 2350 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 7 01:15:33.697311 kubelet[2350]: E0307 01:15:33.695310 2350 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.31:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 01:15:33.843220 kubelet[2350]: I0307 01:15:33.816496 2350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost" Mar 7 01:15:33.843220 kubelet[2350]: I0307 01:15:33.816676 2350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f54a05ccde2a0003764b2d7cbdcc31bf-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f54a05ccde2a0003764b2d7cbdcc31bf\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:15:33.843220 kubelet[2350]: I0307 01:15:33.817123 2350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f54a05ccde2a0003764b2d7cbdcc31bf-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f54a05ccde2a0003764b2d7cbdcc31bf\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:15:33.843220 kubelet[2350]: I0307 01:15:33.817158 2350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f54a05ccde2a0003764b2d7cbdcc31bf-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f54a05ccde2a0003764b2d7cbdcc31bf\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:15:33.866260 
kubelet[2350]: I0307 01:15:33.860776 2350 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:15:33.880785 kubelet[2350]: E0307 01:15:33.880736 2350 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.31:6443/api/v1/nodes\": dial tcp 10.0.0.31:6443: connect: connection refused" node="localhost" Mar 7 01:15:33.969323 kubelet[2350]: E0307 01:15:33.960429 2350 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.31:6443: connect: connection refused" interval="1.6s" Mar 7 01:15:33.999775 systemd[1]: Created slice kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice - libcontainer container kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice. Mar 7 01:15:34.046535 kubelet[2350]: I0307 01:15:34.041778 2350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:15:34.046535 kubelet[2350]: I0307 01:15:34.041823 2350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:15:34.046535 kubelet[2350]: I0307 01:15:34.041854 2350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:15:34.046535 kubelet[2350]: I0307 01:15:34.041883 2350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:15:34.046535 kubelet[2350]: I0307 01:15:34.042135 2350 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:15:34.046535 kubelet[2350]: E0307 01:15:34.043202 2350 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:15:34.067559 kubelet[2350]: E0307 01:15:34.063547 2350 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:15:34.085132 systemd[1]: Created slice kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice - libcontainer container 
kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice. Mar 7 01:15:34.106618 containerd[1478]: time="2026-03-07T01:15:34.088843461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,}" Mar 7 01:15:34.200381 kubelet[2350]: I0307 01:15:34.104883 2350 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:15:34.200381 kubelet[2350]: E0307 01:15:34.105716 2350 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.31:6443/api/v1/nodes\": dial tcp 10.0.0.31:6443: connect: connection refused" node="localhost" Mar 7 01:15:34.242238 kubelet[2350]: E0307 01:15:34.233876 2350 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:15:34.363715 systemd[1]: Created slice kubepods-burstable-podf54a05ccde2a0003764b2d7cbdcc31bf.slice - libcontainer container kubepods-burstable-podf54a05ccde2a0003764b2d7cbdcc31bf.slice. Mar 7 01:15:34.385718 kubelet[2350]: E0307 01:15:34.385677 2350 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:15:34.420261 kubelet[2350]: E0307 01:15:34.413041 2350 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:15:34.420486 containerd[1478]: time="2026-03-07T01:15:34.414089469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f54a05ccde2a0003764b2d7cbdcc31bf,Namespace:kube-system,Attempt:0,}" Mar 7 01:15:34.452848 kubelet[2350]: E0307 01:15:34.451288 2350 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.31:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.31:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189a6a2a054944c4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-07 01:15:32.033569988 +0000 UTC m=+5.765447138,LastTimestamp:2026-03-07 01:15:32.033569988 +0000 UTC m=+5.765447138,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 7 01:15:34.627671 kubelet[2350]: E0307 01:15:34.563387 2350 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 01:15:34.627671 kubelet[2350]: I0307 01:15:34.579355 2350 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:15:34.627671 kubelet[2350]: E0307 01:15:34.627447 2350 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.31:6443/api/v1/nodes\": dial tcp 10.0.0.31:6443: connect: connection refused" node="localhost" Mar 7 01:15:34.669590 kubelet[2350]: E0307 01:15:34.669429 2350 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:15:34.680123 containerd[1478]: time="2026-03-07T01:15:34.677361743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,}" Mar 7 01:15:35.702567 kubelet[2350]: I0307 01:15:35.702233 2350 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:15:35.813464 kubelet[2350]: E0307 01:15:35.686899 2350 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.31:6443: connect: connection refused" interval="3.2s" Mar 7 01:15:35.813464 kubelet[2350]: E0307 01:15:35.803554 2350 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.31:6443/api/v1/nodes\": dial tcp 10.0.0.31:6443: connect: connection refused" node="localhost" Mar 7 01:15:35.946897 kubelet[2350]: E0307 01:15:35.946684 2350 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:15:36.020835 kubelet[2350]: E0307 01:15:36.020771 2350 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:15:36.119462 kubelet[2350]: E0307 01:15:36.118434 2350 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 01:15:36.501874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1934590540.mount: Deactivated successfully. 
Mar 7 01:15:36.558624 containerd[1478]: time="2026-03-07T01:15:36.558350455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:15:36.589735 containerd[1478]: time="2026-03-07T01:15:36.589608822Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 01:15:36.600752 containerd[1478]: time="2026-03-07T01:15:36.600528067Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:15:36.616210 containerd[1478]: time="2026-03-07T01:15:36.613338047Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:15:36.628475 containerd[1478]: time="2026-03-07T01:15:36.628201216Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:15:36.640089 containerd[1478]: time="2026-03-07T01:15:36.639746012Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 7 01:15:36.653494 containerd[1478]: time="2026-03-07T01:15:36.653389941Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 01:15:36.694617 containerd[1478]: time="2026-03-07T01:15:36.689611326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 01:15:36.696653 containerd[1478]: time="2026-03-07T01:15:36.696145075Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.595390226s" Mar 7 01:15:36.710378 containerd[1478]: time="2026-03-07T01:15:36.709824391Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.295617831s" Mar 7 01:15:36.715873 containerd[1478]: time="2026-03-07T01:15:36.711803971Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.034136739s" Mar 7 01:15:36.814251 kubelet[2350]: E0307 01:15:36.807609 2350 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 01:15:37.536862 kubelet[2350]: I0307 01:15:37.535398 2350 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:15:37.536862 kubelet[2350]: E0307 01:15:37.536492 2350 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.31:6443/api/v1/nodes\": dial tcp 10.0.0.31:6443: connect: connection refused" node="localhost" Mar 7 01:15:38.715357 kubelet[2350]: E0307 01:15:38.702823 2350 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.31:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 01:15:39.040220 kubelet[2350]: E0307 01:15:39.035709 2350 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.31:6443: connect: connection refused" interval="6.4s" Mar 7 01:15:41.264843 kubelet[2350]: E0307 01:15:41.263653 2350 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 01:15:41.264843 kubelet[2350]: E0307 01:15:41.264078 2350 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:15:41.304408 kubelet[2350]: E0307 01:15:41.304316 2350 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:15:41.314146 kubelet[2350]: E0307 01:15:41.314048 2350 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 01:15:41.318671 kubelet[2350]: I0307 01:15:41.317296 2350 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:15:41.322657 kubelet[2350]: E0307 01:15:41.320055 2350 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.31:6443/api/v1/nodes\": dial tcp 10.0.0.31:6443: connect: connection refused" node="localhost" Mar 7 01:15:41.332046 containerd[1478]: time="2026-03-07T01:15:41.327354330Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:15:41.332046 containerd[1478]: time="2026-03-07T01:15:41.327549153Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:15:41.332046 containerd[1478]: time="2026-03-07T01:15:41.327716454Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:15:41.332046 containerd[1478]: time="2026-03-07T01:15:41.327784684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:15:41.332046 containerd[1478]: time="2026-03-07T01:15:41.327713183Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:15:41.332046 containerd[1478]: time="2026-03-07T01:15:41.327788425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:15:41.332046 containerd[1478]: time="2026-03-07T01:15:41.329382022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:15:41.342310 containerd[1478]: time="2026-03-07T01:15:41.335695337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:15:41.438303 containerd[1478]: time="2026-03-07T01:15:41.432786291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:15:41.438303 containerd[1478]: time="2026-03-07T01:15:41.432867395Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:15:41.438303 containerd[1478]: time="2026-03-07T01:15:41.432998141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:15:41.438303 containerd[1478]: time="2026-03-07T01:15:41.433144167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:15:41.859434 systemd[1]: Started cri-containerd-4ec0a69da46ebf1633da29001e545386c7011ca6344612162343e44aeedf3bff.scope - libcontainer container 4ec0a69da46ebf1633da29001e545386c7011ca6344612162343e44aeedf3bff. Mar 7 01:15:41.877802 systemd[1]: Started cri-containerd-7f433e06b96c260756587b7e2ef17cb73d45c809749e8496bed201e14ec3d04e.scope - libcontainer container 7f433e06b96c260756587b7e2ef17cb73d45c809749e8496bed201e14ec3d04e. Mar 7 01:15:42.086504 systemd[1]: Started cri-containerd-861ce371ae42598eb0a3c1515d6decb7a9f6f6c8c5fdd9699c95edc2172a3737.scope - libcontainer container 861ce371ae42598eb0a3c1515d6decb7a9f6f6c8c5fdd9699c95edc2172a3737. 
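
[Annotation: the three .scope units started above follow containerd's CRI naming visible in the log itself, `cri-containerd-<id>.scope`, where <id> is the pod sandbox ID returned by the surrounding RunPodSandbox messages. A tiny illustrative helper for matching unit names back to IDs when reading logs like these:]

```python
# Recover the sandbox/container ID from a cri-containerd scope unit name,
# and verify it against one of the units systemd just started above.
def scope_to_id(unit: str) -> str:
    prefix, suffix = "cri-containerd-", ".scope"
    assert unit.startswith(prefix) and unit.endswith(suffix)
    return unit[len(prefix):-len(suffix)]

print(scope_to_id(
    "cri-containerd-4ec0a69da46ebf1633da29001e545386c7011ca6344612162343e44aeedf3bff.scope"
))
# -> 4ec0a69da46ebf1633da29001e545386c7011ca6344612162343e44aeedf3bff
```
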
Mar 7 01:15:43.187743 containerd[1478]: time="2026-03-07T01:15:43.187168451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f433e06b96c260756587b7e2ef17cb73d45c809749e8496bed201e14ec3d04e\"" Mar 7 01:15:43.198401 kubelet[2350]: E0307 01:15:43.196773 2350 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:15:44.200764 kubelet[2350]: E0307 01:15:44.200398 2350 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 7 01:15:44.224427 containerd[1478]: time="2026-03-07T01:15:44.210711659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f54a05ccde2a0003764b2d7cbdcc31bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"861ce371ae42598eb0a3c1515d6decb7a9f6f6c8c5fdd9699c95edc2172a3737\"" Mar 7 01:15:44.224427 containerd[1478]: time="2026-03-07T01:15:44.219575973Z" level=info msg="CreateContainer within sandbox \"7f433e06b96c260756587b7e2ef17cb73d45c809749e8496bed201e14ec3d04e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 7 01:15:44.267713 containerd[1478]: time="2026-03-07T01:15:44.263676684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ec0a69da46ebf1633da29001e545386c7011ca6344612162343e44aeedf3bff\"" Mar 7 01:15:44.267870 kubelet[2350]: E0307 01:15:44.265318 2350 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:15:44.267870 kubelet[2350]: E0307 01:15:44.265688 2350 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:15:44.316998 containerd[1478]: time="2026-03-07T01:15:44.316596447Z" level=info msg="CreateContainer within sandbox \"4ec0a69da46ebf1633da29001e545386c7011ca6344612162343e44aeedf3bff\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 7 01:15:44.334022 containerd[1478]: time="2026-03-07T01:15:44.331447040Z" level=info msg="CreateContainer within sandbox \"861ce371ae42598eb0a3c1515d6decb7a9f6f6c8c5fdd9699c95edc2172a3737\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 7 01:15:44.456472 kubelet[2350]: E0307 01:15:44.455164 2350 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.31:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.31:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189a6a2a054944c4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-07 01:15:32.033569988 +0000 UTC m=+5.765447138,LastTimestamp:2026-03-07 01:15:32.033569988 +0000 UTC m=+5.765447138,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 7 
01:15:44.547164 containerd[1478]: time="2026-03-07T01:15:44.542765249Z" level=info msg="CreateContainer within sandbox \"7f433e06b96c260756587b7e2ef17cb73d45c809749e8496bed201e14ec3d04e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a860a19954e78171e07733e52b8bbc141578f40332481fed03e5012a1f45328f\"" Mar 7 01:15:44.570304 containerd[1478]: time="2026-03-07T01:15:44.564168110Z" level=info msg="StartContainer for \"a860a19954e78171e07733e52b8bbc141578f40332481fed03e5012a1f45328f\"" Mar 7 01:15:44.611562 containerd[1478]: time="2026-03-07T01:15:44.605579027Z" level=info msg="CreateContainer within sandbox \"861ce371ae42598eb0a3c1515d6decb7a9f6f6c8c5fdd9699c95edc2172a3737\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9cfc03c1c6f4fdabcfda6658bccfebea601b665a52043da27eeac917921d0103\"" Mar 7 01:15:44.611562 containerd[1478]: time="2026-03-07T01:15:44.606857339Z" level=info msg="StartContainer for \"9cfc03c1c6f4fdabcfda6658bccfebea601b665a52043da27eeac917921d0103\"" Mar 7 01:15:44.669168 containerd[1478]: time="2026-03-07T01:15:44.667544833Z" level=info msg="CreateContainer within sandbox \"4ec0a69da46ebf1633da29001e545386c7011ca6344612162343e44aeedf3bff\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"062afa22810b441b6e2cb124d875dad68e794f6285a955f9a7690618719bd4f4\"" Mar 7 01:15:44.681910 containerd[1478]: time="2026-03-07T01:15:44.678014858Z" level=info msg="StartContainer for \"062afa22810b441b6e2cb124d875dad68e794f6285a955f9a7690618719bd4f4\"" Mar 7 01:15:45.137522 systemd[1]: Started cri-containerd-a860a19954e78171e07733e52b8bbc141578f40332481fed03e5012a1f45328f.scope - libcontainer container a860a19954e78171e07733e52b8bbc141578f40332481fed03e5012a1f45328f. Mar 7 01:15:45.934632 kubelet[2350]: E0307 01:15:45.925863 2350 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.31:6443: connect: connection refused" interval="7s" Mar 7 01:15:46.015392 systemd[1]: Started cri-containerd-062afa22810b441b6e2cb124d875dad68e794f6285a955f9a7690618719bd4f4.scope - libcontainer container 062afa22810b441b6e2cb124d875dad68e794f6285a955f9a7690618719bd4f4. Mar 7 01:15:46.027640 systemd[1]: Started cri-containerd-9cfc03c1c6f4fdabcfda6658bccfebea601b665a52043da27eeac917921d0103.scope - libcontainer container 9cfc03c1c6f4fdabcfda6658bccfebea601b665a52043da27eeac917921d0103. 
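
[Annotation: across this section the lease controller's "will retry" interval doubles from 200ms through 400ms, 800ms, 1.6s, 3.2s and 6.4s before settling at the 7s seen above, i.e. exponential backoff with a ceiling. A sketch of that schedule; the base and cap are read off this log, not taken from kubelet source, and the function is illustrative:]

```python
# Reproduce the retry schedule observed in the "Failed to ensure lease
# exists, will retry ... interval=" messages: doubling from 200ms, capped
# at 7s once the doubled value would exceed it.
def lease_retry_intervals(base: float = 0.2, cap: float = 7.0, steps: int = 8):
    out, cur = [], base
    for _ in range(steps):
        out.append(min(cur, cap))
        cur *= 2
    return out

print(lease_retry_intervals())
# -> [0.2, 0.4, 0.8, 1.6, 3.2, 6.4, 7.0, 7.0]
```
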
Mar 7 01:15:46.672410 containerd[1478]: time="2026-03-07T01:15:46.665907736Z" level=info msg="StartContainer for \"a860a19954e78171e07733e52b8bbc141578f40332481fed03e5012a1f45328f\" returns successfully" Mar 7 01:15:47.725072 kubelet[2350]: E0307 01:15:47.709598 2350 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.31:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 01:15:47.968259 kubelet[2350]: I0307 01:15:47.967663 2350 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:15:48.312288 kubelet[2350]: E0307 01:15:48.291109 2350 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.31:6443/api/v1/nodes\": dial tcp 10.0.0.31:6443: connect: connection refused" node="localhost" Mar 7 01:15:48.843769 containerd[1478]: time="2026-03-07T01:15:48.843701413Z" level=info msg="StartContainer for \"9cfc03c1c6f4fdabcfda6658bccfebea601b665a52043da27eeac917921d0103\" returns successfully" Mar 7 01:15:48.955816 containerd[1478]: time="2026-03-07T01:15:48.952059078Z" level=info msg="StartContainer for \"062afa22810b441b6e2cb124d875dad68e794f6285a955f9a7690618719bd4f4\" returns successfully" Mar 7 01:15:48.989127 kubelet[2350]: E0307 01:15:48.968668 2350 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:15:48.989127 kubelet[2350]: E0307 01:15:48.969192 2350 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:15:49.802826 kubelet[2350]: E0307 01:15:49.801766 2350 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 01:15:49.822830 kubelet[2350]: E0307 01:15:49.806890 2350 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:15:50.707798 kubelet[2350]: E0307 01:15:50.707703 2350 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:15:50.713859 kubelet[2350]: E0307 01:15:50.713823 2350 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:15:50.719791 kubelet[2350]: E0307 01:15:50.719752 2350 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:15:50.720287 kubelet[2350]: E0307 01:15:50.720264 2350 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 
01:15:50.744634 kubelet[2350]: E0307 01:15:50.744511 2350 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:15:50.755586 kubelet[2350]: E0307 01:15:50.755538 2350 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:15:52.173359 kubelet[2350]: E0307 01:15:52.162352 2350 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:15:52.173359 kubelet[2350]: E0307 01:15:52.165722 2350 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:15:52.242893 kubelet[2350]: E0307 01:15:52.224803 2350 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:15:52.242893 kubelet[2350]: E0307 01:15:52.229677 2350 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:15:53.202659 kubelet[2350]: E0307 01:15:53.195620 2350 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:15:53.202659 kubelet[2350]: E0307 01:15:53.196246 2350 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:15:53.234088 kubelet[2350]: E0307 01:15:53.222835 2350 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:15:53.234088 kubelet[2350]: E0307 01:15:53.223203 2350 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:15:54.240113 kubelet[2350]: E0307 01:15:54.215674 2350 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 7 01:15:54.821005 kubelet[2350]: E0307 01:15:54.818608 2350 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:15:54.821005 kubelet[2350]: E0307 01:15:54.819141 2350 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:15:55.450914 kubelet[2350]: I0307 01:15:55.444007 2350 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:16:00.712172 kubelet[2350]: E0307 01:16:00.711651 2350 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:16:00.734213 kubelet[2350]: E0307 01:16:00.730810 2350 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:16:00.734213 kubelet[2350]: E0307 01:16:00.731414 2350 dns.go:154] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:16:00.734213 kubelet[2350]: E0307 01:16:00.731912 2350 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:16:01.916863 kubelet[2350]: E0307 01:16:01.904833 2350 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 01:16:03.058257 kubelet[2350]: E0307 01:16:03.052374 2350 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 7 01:16:03.312143 kubelet[2350]: E0307 01:16:03.296750 2350 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 01:16:04.238481 kubelet[2350]: E0307 01:16:04.236026 2350 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 7 01:16:04.663019 kubelet[2350]: E0307 01:16:04.550328 2350 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.31:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.189a6a2a054944c4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-07 01:15:32.033569988 +0000 UTC m=+5.765447138,LastTimestamp:2026-03-07 01:15:32.033569988 +0000 UTC m=+5.765447138,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 7 01:16:04.894814 kubelet[2350]: E0307 01:16:04.894697 2350 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:16:04.896231 kubelet[2350]: E0307 01:16:04.895222 2350 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:16:05.490282 kubelet[2350]: E0307 01:16:05.479752 2350 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.31:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Mar 7 01:16:12.538362 kubelet[2350]: I0307 01:16:12.526272 2350 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:16:16.261310 kubelet[2350]: E0307 01:16:16.253586 2350 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://10.0.0.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 01:16:16.297019 kubelet[2350]: E0307 01:16:16.257151 2350 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 7 01:16:16.320701 kubelet[2350]: E0307 01:16:16.310800 2350 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.31:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 01:16:16.320701 kubelet[2350]: E0307 01:16:16.310877 2350 certificate_manager.go:461] "Reached backoff limit, still unable to rotate certs" err="timed out waiting for the condition" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 01:16:18.107023 kubelet[2350]: E0307 01:16:18.104625 2350 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 01:16:20.068266 kubelet[2350]: E0307 01:16:20.067215 2350 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 7 01:16:20.134122 kubelet[2350]: E0307 01:16:20.133883 2350 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:16:20.140064 kubelet[2350]: E0307 01:16:20.134606 2350 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:16:22.885413 kubelet[2350]: I0307 01:16:22.884882 2350 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 7 01:16:22.885413 kubelet[2350]: E0307 01:16:22.885172 2350 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 7 01:16:23.064308 kubelet[2350]: E0307 01:16:23.030412 2350 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.189a6a2a054944c4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-07 01:15:32.033569988 +0000 UTC m=+5.765447138,LastTimestamp:2026-03-07 01:15:32.033569988 +0000 UTC m=+5.765447138,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 7 01:16:23.318133 kubelet[2350]: E0307 01:16:23.299644 2350 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" 
event="&Event{ObjectMeta:{localhost.189a6a2a22ae5d1f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-07 01:15:32.526734623 +0000 UTC m=+6.258611743,LastTimestamp:2026-03-07 01:15:32.526734623 +0000 UTC m=+6.258611743,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 7 01:16:23.318133 kubelet[2350]: E0307 01:16:23.313115 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:23.426812 kubelet[2350]: E0307 01:16:23.415457 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:23.684483 kubelet[2350]: E0307 01:16:23.642106 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:23.749242 kubelet[2350]: E0307 01:16:23.748996 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:23.852117 kubelet[2350]: E0307 01:16:23.850513 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:23.951864 kubelet[2350]: E0307 01:16:23.951439 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:24.053132 kubelet[2350]: E0307 01:16:24.053084 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:24.181152 kubelet[2350]: E0307 01:16:24.169598 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:24.296662 kubelet[2350]: E0307 01:16:24.295512 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:24.398314 kubelet[2350]: E0307 01:16:24.398216 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:24.501113 kubelet[2350]: E0307 01:16:24.500998 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:24.625994 kubelet[2350]: E0307 01:16:24.604398 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:24.725295 kubelet[2350]: E0307 01:16:24.725223 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:24.828188 kubelet[2350]: E0307 01:16:24.828125 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:24.989510 kubelet[2350]: E0307 01:16:24.962729 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:25.264982 kubelet[2350]: E0307 01:16:25.225199 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:25.666429 kubelet[2350]: E0307 01:16:25.635355 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node 
\"localhost\" not found" Mar 7 01:16:25.755039 kubelet[2350]: E0307 01:16:25.736697 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:26.073185 kubelet[2350]: E0307 01:16:26.067053 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:26.514072 kubelet[2350]: E0307 01:16:26.432657 2350 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 7 01:16:26.514072 kubelet[2350]: E0307 01:16:26.451128 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:26.560319 kubelet[2350]: E0307 01:16:26.556135 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:26.662076 kubelet[2350]: E0307 01:16:26.661906 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:26.763099 kubelet[2350]: E0307 01:16:26.762851 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:27.018427 kubelet[2350]: E0307 01:16:27.017425 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:27.133907 kubelet[2350]: E0307 01:16:27.133739 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:27.434139 kubelet[2350]: E0307 01:16:27.248430 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:27.507473 kubelet[2350]: E0307 01:16:27.492548 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:27.790655 kubelet[2350]: E0307 01:16:27.732676 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:27.950897 kubelet[2350]: E0307 01:16:27.939458 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:28.098634 kubelet[2350]: E0307 01:16:28.069010 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:28.170299 kubelet[2350]: E0307 01:16:28.169830 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:28.305208 kubelet[2350]: E0307 01:16:28.304302 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:28.464891 kubelet[2350]: E0307 01:16:28.437533 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:28.545084 kubelet[2350]: E0307 01:16:28.544060 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:28.652988 kubelet[2350]: E0307 01:16:28.650379 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:28.812495 kubelet[2350]: E0307 01:16:28.809074 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:28.910526 kubelet[2350]: E0307 01:16:28.910444 2350 
kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:29.014349 kubelet[2350]: E0307 01:16:29.014057 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:29.121458 kubelet[2350]: E0307 01:16:29.114317 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:29.221490 kubelet[2350]: E0307 01:16:29.221280 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:29.327331 kubelet[2350]: E0307 01:16:29.326396 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:29.431712 kubelet[2350]: E0307 01:16:29.430330 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:29.559271 kubelet[2350]: E0307 01:16:29.543592 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:29.660088 kubelet[2350]: E0307 01:16:29.660019 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:29.761471 kubelet[2350]: E0307 01:16:29.761315 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:29.862716 kubelet[2350]: E0307 01:16:29.861551 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:29.964635 kubelet[2350]: E0307 01:16:29.964555 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:30.064873 kubelet[2350]: E0307 01:16:30.064641 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:30.184066 kubelet[2350]: E0307 01:16:30.164886 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:30.283732 kubelet[2350]: E0307 01:16:30.283610 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:30.395670 kubelet[2350]: E0307 01:16:30.395454 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:30.501198 kubelet[2350]: E0307 01:16:30.496192 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:30.614323 kubelet[2350]: E0307 01:16:30.614259 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:30.719173 kubelet[2350]: E0307 01:16:30.714888 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:30.817402 kubelet[2350]: E0307 01:16:30.817103 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:30.920056 kubelet[2350]: E0307 01:16:30.918646 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:31.027312 kubelet[2350]: E0307 01:16:31.024212 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 
01:16:31.129430 kubelet[2350]: E0307 01:16:31.127549 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:31.229428 kubelet[2350]: E0307 01:16:31.229373 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:31.333703 kubelet[2350]: E0307 01:16:31.332557 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:31.451252 kubelet[2350]: E0307 01:16:31.440713 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:31.549608 kubelet[2350]: E0307 01:16:31.547372 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:31.661539 kubelet[2350]: E0307 01:16:31.658874 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:31.771568 kubelet[2350]: E0307 01:16:31.766266 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:31.867596 kubelet[2350]: E0307 01:16:31.867535 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:32.037561 kubelet[2350]: E0307 01:16:32.036399 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:32.139463 kubelet[2350]: E0307 01:16:32.138126 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:32.244491 kubelet[2350]: E0307 01:16:32.243493 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:32.346317 kubelet[2350]: E0307 01:16:32.345305 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:32.478702 kubelet[2350]: E0307 01:16:32.451331 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:32.867058 kubelet[2350]: E0307 01:16:32.866673 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:32.971067 kubelet[2350]: E0307 01:16:32.967359 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:33.196276 kubelet[2350]: E0307 01:16:33.140360 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:33.255531 kubelet[2350]: E0307 01:16:33.248535 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:33.420188 kubelet[2350]: E0307 01:16:33.393544 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:33.497285 kubelet[2350]: E0307 01:16:33.496143 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:33.597471 kubelet[2350]: E0307 01:16:33.597369 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:33.734175 kubelet[2350]: E0307 01:16:33.727192 2350 kubelet_node_status.go:486] "Error updating node status, 
will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 7 01:16:33.824205 kubelet[2350]: E0307 01:16:33.823106 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:33.941470 kubelet[2350]: E0307 01:16:33.939060 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:34.054864 kubelet[2350]: E0307 01:16:34.054445 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:34.202779 kubelet[2350]: E0307 01:16:34.185659 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:34.291730 kubelet[2350]: E0307 01:16:34.287744 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:34.392010 kubelet[2350]: E0307 01:16:34.388587 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:34.498515 kubelet[2350]: E0307 01:16:34.493000 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:34.604211 kubelet[2350]: E0307 01:16:34.602116 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:34.704405 kubelet[2350]: E0307 01:16:34.704340 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:34.813052 kubelet[2350]: E0307 01:16:34.808447 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:34.915063 kubelet[2350]: E0307 01:16:34.912114 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:35.145167 kubelet[2350]: E0307 01:16:35.139268 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:35.311555 kubelet[2350]: E0307 01:16:35.305066 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:35.408459 kubelet[2350]: E0307 01:16:35.408119 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:35.509685 kubelet[2350]: E0307 01:16:35.508694 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:35.615865 kubelet[2350]: E0307 01:16:35.615747 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:35.742024 kubelet[2350]: E0307 01:16:35.740552 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:35.841475 kubelet[2350]: E0307 01:16:35.841395 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:35.945361 kubelet[2350]: E0307 01:16:35.944123 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:36.206979 kubelet[2350]: E0307 01:16:36.166191 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:36.335843 
kubelet[2350]: E0307 01:16:36.332692 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:36.663030 kubelet[2350]: E0307 01:16:36.642855 2350 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 7 01:16:36.998131 kubelet[2350]: E0307 01:16:36.997043 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:37.110223 kubelet[2350]: E0307 01:16:37.109243 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:37.212274 kubelet[2350]: E0307 01:16:37.210052 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:37.420049 kubelet[2350]: E0307 01:16:37.314453 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:37.545052 kubelet[2350]: E0307 01:16:37.543675 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:37.646778 kubelet[2350]: E0307 01:16:37.646337 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:37.754535 kubelet[2350]: E0307 01:16:37.753346 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:37.865891 kubelet[2350]: E0307 01:16:37.857436 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:37.965769 kubelet[2350]: E0307 01:16:37.965679 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:38.082978 kubelet[2350]: E0307 01:16:38.075786 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:38.183256 kubelet[2350]: E0307 01:16:38.183038 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:38.458250 kubelet[2350]: E0307 01:16:38.442196 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:38.656396 kubelet[2350]: E0307 01:16:38.655189 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:38.797166 kubelet[2350]: E0307 01:16:38.796412 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:38.907208 kubelet[2350]: E0307 01:16:38.906481 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:39.010509 kubelet[2350]: E0307 01:16:39.008134 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:39.009540 systemd[1]: Reloading requested from client PID 2656 ('systemctl') (unit session-9.scope)... Mar 7 01:16:39.009562 systemd[1]: Reloading... 
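The long run of kubelet_node_status.go:404 errors above is the kubelet's sync loop asking its node lister for the Node object before the API server has finished registering it ("Attempting to register node" at 01:16:12, "Successfully registered node" at 01:16:22, but the watch cache lags behind the TLS-timeout-ridden API server). A minimal, self-contained sketch of the same NotFound-versus-real-error split, using the client-go fake clientset so it runs standalone; waitForNode and all timings are illustrative, not kubelet internals:

    // Sketch: poll for a Node and treat only NotFound as "not registered
    // yet", the same distinction the kubelet makes in the log lines above.
    package main

    import (
        "context"
        "fmt"
        "time"

        v1 "k8s.io/api/core/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/kubernetes/fake"
    )

    func waitForNode(ctx context.Context, cs kubernetes.Interface, name string) (*v1.Node, error) {
        for {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                return node, nil
            }
            if !apierrors.IsNotFound(err) {
                return nil, err // a real failure, not just "node not registered yet"
            }
            select {
            case <-ctx.Done():
                return nil, ctx.Err()
            case <-time.After(100 * time.Millisecond): // retry while NotFound
            }
        }
    }

    func main() {
        cs := fake.NewSimpleClientset()
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
        defer cancel()
        // Register the node after a delay, as the API server eventually did above.
        go func() {
            time.Sleep(300 * time.Millisecond)
            _, _ = cs.CoreV1().Nodes().Create(ctx,
                &v1.Node{ObjectMeta: metav1.ObjectMeta{Name: "localhost"}},
                metav1.CreateOptions{})
        }()
        node, err := waitForNode(ctx, cs, "localhost")
        if err != nil {
            fmt.Println("wait failed:", err)
            return
        }
        fmt.Println("node registered:", node.Name)
    }

The key point is apierrors.IsNotFound: only that case means "keep waiting"; anything else, such as the earlier TLS handshake timeouts against https://10.0.0.31:6443, surfaces as a genuine failure.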
Mar 7 01:16:39.111196 kubelet[2350]: E0307 01:16:39.108397 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:39.212239 kubelet[2350]: E0307 01:16:39.209198 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:39.310466 kubelet[2350]: E0307 01:16:39.310345 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:39.411122 kubelet[2350]: E0307 01:16:39.410868 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:39.533765 kubelet[2350]: E0307 01:16:39.531480 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:39.633514 kubelet[2350]: E0307 01:16:39.633376 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:39.747353 kubelet[2350]: E0307 01:16:39.743367 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:39.845223 kubelet[2350]: E0307 01:16:39.845171 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:40.042237 kubelet[2350]: E0307 01:16:40.008998 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:40.128912 kubelet[2350]: E0307 01:16:40.122247 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:40.224361 kubelet[2350]: E0307 01:16:40.224146 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:40.231847 zram_generator::config[2695]: No configuration found. 
Mar 7 01:16:40.329670 kubelet[2350]: E0307 01:16:40.324694 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:40.380050 kubelet[2350]: E0307 01:16:40.377136 2350 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:16:40.380050 kubelet[2350]: E0307 01:16:40.377564 2350 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:16:40.439460 kubelet[2350]: E0307 01:16:40.431880 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:40.534219 kubelet[2350]: E0307 01:16:40.534147 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:40.644463 kubelet[2350]: E0307 01:16:40.643891 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:40.765174 kubelet[2350]: E0307 01:16:40.760586 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:40.867446 kubelet[2350]: E0307 01:16:40.867332 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:40.970746 kubelet[2350]: E0307 01:16:40.970294 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:41.139020 kubelet[2350]: E0307 01:16:41.129504 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:41.739310 kubelet[2350]: E0307 01:16:41.733335 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:42.706977 kubelet[2350]: E0307 01:16:42.524557 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:44.221477 kubelet[2350]: E0307 01:16:44.213092 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:44.221477 kubelet[2350]: E0307 01:16:44.221122 2350 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 7 01:16:44.265794 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
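The docker.socket warning just above is self-healing: systemd has already rewritten the legacy /var/run/docker.sock listener to /run/docker.sock at load time and only asks that the unit file catch up. One way to do that without touching the vendor unit, assuming the stock unit layout, is a drop-in override; for socket units the empty ListenStream= assignment is required first, to clear the inherited list before setting the new path:

    # /etc/systemd/system/docker.socket.d/10-runtime-dir.conf (illustrative path)
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock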
Mar 7 01:16:46.913241 kubelet[2350]: E0307 01:16:46.912120 2350 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 7 01:16:47.037998 kubelet[2350]: E0307 01:16:47.019418 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:47.037998 kubelet[2350]: E0307 01:16:47.019862 2350 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:16:47.037998 kubelet[2350]: E0307 01:16:47.020334 2350 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:16:47.053005 kubelet[2350]: E0307 01:16:47.015768 2350 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 7 01:16:47.053005 kubelet[2350]: E0307 01:16:47.048485 2350 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:16:47.121663 kubelet[2350]: E0307 01:16:47.121605 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:47.225549 kubelet[2350]: E0307 01:16:47.224262 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:47.325790 kubelet[2350]: E0307 01:16:47.325189 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:47.393656 systemd[1]: Reloading finished in 8383 ms. Mar 7 01:16:47.427117 kubelet[2350]: E0307 01:16:47.425548 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:47.527030 kubelet[2350]: E0307 01:16:47.526693 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:48.157767 update_engine[1464]: I20260307 01:16:47.701565 1464 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Mar 7 01:16:48.157767 update_engine[1464]: I20260307 01:16:48.219201 1464 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Mar 7 01:16:48.536996 update_engine[1464]: I20260307 01:16:48.339679 1464 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Mar 7 01:16:50.232328 kubelet[2350]: E0307 01:16:48.460896 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:50.623600 update_engine[1464]: I20260307 01:16:50.620083 1464 omaha_request_params.cc:62] Current group set to lts Mar 7 01:16:50.675297 update_engine[1464]: I20260307 01:16:50.620785 1464 update_attempter.cc:499] Already updated boot flags. Skipping. Mar 7 01:16:50.675297 update_engine[1464]: I20260307 01:16:50.671092 1464 update_attempter.cc:643] Scheduling an action processor start. 
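The recurring dns.go:154 "Nameserver limits exceeded" warnings are benign but noisy: the host's resolv.conf lists more than the classic resolver limit of three nameservers, so the kubelet applies only the first three (1.1.1.1 1.0.0.1 8.8.8.8 here) and drops the rest. A rough sketch of that truncation with deliberately simplified parsing; the real logic lives in the kubelet's DNS configurer, not in this snippet:

    // Sketch: keep the first 3 nameservers from resolv.conf, report the rest
    // as omitted, mirroring the dns.go:154 warning text above.
    package main

    import (
        "fmt"
        "strings"
    )

    const maxNameservers = 3 // historical resolver limit the kubelet enforces

    func applyNameserverLimit(resolvConf string) (applied, omitted []string) {
        var servers []string
        for _, line := range strings.Split(resolvConf, "\n") {
            fields := strings.Fields(line)
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) <= maxNameservers {
            return servers, nil
        }
        return servers[:maxNameservers], servers[maxNameservers:]
    }

    func main() {
        conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
        applied, omitted := applyNameserverLimit(conf)
        fmt.Println("applied:", strings.Join(applied, " ")) // 1.1.1.1 1.0.0.1 8.8.8.8
        fmt.Println("omitted:", omitted)                    // [8.8.4.4]
    }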
Mar 7 01:16:50.675297 update_engine[1464]: I20260307 01:16:50.671139 1464 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 7 01:16:50.675297 update_engine[1464]: I20260307 01:16:50.671307 1464 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Mar 7 01:16:50.675297 update_engine[1464]: I20260307 01:16:50.671612 1464 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 7 01:16:50.675297 update_engine[1464]: I20260307 01:16:50.671638 1464 omaha_request_action.cc:272] Request: Mar 7 01:16:50.675297 update_engine[1464]: [Omaha request XML body not captured in this transcript] Mar 7 01:16:50.675297 update_engine[1464]: I20260307 01:16:50.671649 1464 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 01:16:50.676737 locksmithd[1509]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Mar 7 01:16:50.740663 update_engine[1464]: I20260307 01:16:50.740613 1464 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 01:16:50.755624 update_engine[1464]: I20260307 01:16:50.755512 1464 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 7 01:16:50.767693 kubelet[2350]: E0307 01:16:50.767648 2350 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 7 01:16:50.798374 update_engine[1464]: E20260307 01:16:50.782315 1464 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 01:16:50.805618 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:16:50.816220 update_engine[1464]: I20260307 01:16:50.814014 1464 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Mar 7 01:16:50.898735 systemd[1]: kubelet.service: Deactivated successfully. Mar 7 01:16:50.899244 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:16:50.899318 systemd[1]: kubelet.service: Consumed 17.983s CPU time, 130.5M memory peak, 0B memory swap peak. Mar 7 01:16:50.950724 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 01:16:54.572395 (kubelet)[2740]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 01:16:54.587834 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 01:16:56.879107 kubelet[2740]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 7 01:16:56.926367 kubelet[2740]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
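Both kubelet warnings at the end of the restart above point the same way: --volume-plugin-dir should move into the file passed via --config, and --pod-infra-container-image is on its way out because the CRI runtime now owns sandbox image information. A fragment of the config-file equivalent, per the kubelet-config-file docs linked in the warning; the directory value here is an assumption for illustration, not taken from this log:

    # KubeletConfiguration fragment (illustrative)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    volumePluginDir: /var/lib/kubelet/volumeplugins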
Mar 7 01:16:56.926367 kubelet[2740]: I0307 01:16:56.917314 2740 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 7 01:16:57.041237 kubelet[2740]: I0307 01:16:57.040864 2740 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 7 01:16:57.041237 kubelet[2740]: I0307 01:16:57.041051 2740 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 01:16:57.041535 kubelet[2740]: I0307 01:16:57.041267 2740 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 7 01:16:57.041535 kubelet[2740]: I0307 01:16:57.041290 2740 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 7 01:16:57.068654 kubelet[2740]: I0307 01:16:57.068538 2740 server.go:956] "Client rotation is on, will bootstrap in background" Mar 7 01:16:57.110062 kubelet[2740]: I0307 01:16:57.109406 2740 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 7 01:16:57.188358 kubelet[2740]: I0307 01:16:57.171118 2740 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 01:16:57.401855 kubelet[2740]: E0307 01:16:57.401745 2740 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 01:16:57.409071 kubelet[2740]: I0307 01:16:57.402603 2740 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 7 01:16:58.147659 kubelet[2740]: I0307 01:16:58.141246 2740 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 7 01:16:58.362755 kubelet[2740]: I0307 01:16:58.358916 2740 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 01:16:58.527700 kubelet[2740]: I0307 01:16:58.367518 2740 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 7 01:16:58.602414 kubelet[2740]: I0307 01:16:58.555693 2740 topology_manager.go:138] "Creating topology manager with none policy" Mar 7 01:16:58.602414 kubelet[2740]: I0307 01:16:58.599233 2740 container_manager_linux.go:306] "Creating device plugin manager" Mar 7 01:16:58.613070 kubelet[2740]: I0307 01:16:58.612410 2740 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 7 01:16:58.654114 kubelet[2740]: I0307 01:16:58.646504 2740 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:16:58.657433 kubelet[2740]: I0307 01:16:58.657202 2740 kubelet.go:475] "Attempting to sync node with API server" Mar 7 01:16:58.657433 kubelet[2740]: I0307 01:16:58.657344 2740 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 01:16:58.657433 kubelet[2740]: I0307 01:16:58.657410 2740 kubelet.go:387] "Adding apiserver pod source" Mar 7 01:16:58.657677 kubelet[2740]: I0307 01:16:58.657447 2740 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 01:16:59.925290 kubelet[2740]: I0307 01:16:59.922533 2740 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 01:16:59.948631 kubelet[2740]: I0307 01:16:59.933397 2740 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 01:16:59.948631 kubelet[2740]: I0307 01:16:59.933439 2740 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 7 01:17:00.007074 kubelet[2740]: I0307 
01:17:00.006908 2740 server.go:1262] "Started kubelet" Mar 7 01:17:00.013259 kubelet[2740]: I0307 01:17:00.011807 2740 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 01:17:00.013259 kubelet[2740]: I0307 01:17:00.011896 2740 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 7 01:17:00.013259 kubelet[2740]: I0307 01:17:00.012651 2740 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 01:17:00.023516 kubelet[2740]: I0307 01:17:00.015085 2740 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 01:17:00.067116 kubelet[2740]: I0307 01:17:00.067042 2740 server.go:310] "Adding debug handlers to kubelet server" Mar 7 01:17:00.130721 kubelet[2740]: I0307 01:17:00.129817 2740 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 7 01:17:00.140583 kubelet[2740]: I0307 01:17:00.136991 2740 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 01:17:00.157272 kubelet[2740]: I0307 01:17:00.139669 2740 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 7 01:17:00.158333 kubelet[2740]: I0307 01:17:00.139727 2740 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 7 01:17:00.158333 kubelet[2740]: I0307 01:17:00.158186 2740 reconciler.go:29] "Reconciler: start to sync state" Mar 7 01:17:00.164678 kubelet[2740]: E0307 01:17:00.159656 2740 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 01:17:00.178729 kubelet[2740]: I0307 01:17:00.167386 2740 factory.go:223] Registration of the systemd container factory successfully Mar 7 01:17:00.178729 kubelet[2740]: I0307 01:17:00.168914 2740 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 01:17:00.186755 kubelet[2740]: I0307 01:17:00.186639 2740 factory.go:223] Registration of the containerd container factory successfully Mar 7 01:17:00.842388 kubelet[2740]: I0307 01:17:00.826556 2740 apiserver.go:52] "Watching apiserver" Mar 7 01:17:01.494356 kubelet[2740]: I0307 01:17:01.483662 2740 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 7 01:17:01.693167 update_engine[1464]: I20260307 01:17:01.661034 1464 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 01:17:01.744606 update_engine[1464]: I20260307 01:17:01.742719 1464 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 01:17:01.750912 update_engine[1464]: I20260307 01:17:01.747820 1464 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 7 01:17:01.762214 kubelet[2740]: I0307 01:17:01.760657 2740 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 7 01:17:01.777663 kubelet[2740]: I0307 01:17:01.773293 2740 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 7 01:17:01.777663 kubelet[2740]: I0307 01:17:01.773610 2740 kubelet.go:2428] "Starting kubelet main sync loop" Mar 7 01:17:01.777663 kubelet[2740]: E0307 01:17:01.773722 2740 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 01:17:01.795515 update_engine[1464]: E20260307 01:17:01.785347 1464 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 01:17:01.807508 update_engine[1464]: I20260307 01:17:01.807245 1464 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Mar 7 01:17:01.898982 kubelet[2740]: E0307 01:17:01.896259 2740 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 01:17:02.337122 sudo[2777]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 7 01:17:03.200689 sudo[2777]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 7 01:17:03.343527 kubelet[2740]: E0307 01:17:03.212382 2740 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 7 01:17:03.634116 kubelet[2740]: E0307 01:17:03.618440 2740 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 7 01:17:04.330449 kubelet[2740]: I0307 01:17:04.329711 2740 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 7 01:17:04.330449 kubelet[2740]: I0307 01:17:04.329735 2740 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 7 01:17:04.330449 kubelet[2740]: I0307 01:17:04.329821 2740 state_mem.go:36] "Initialized new in-memory state store" Mar 7 01:17:04.330449 kubelet[2740]: I0307 01:17:04.330278 2740 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 7 01:17:04.330449 kubelet[2740]: I0307 01:17:04.330295 2740 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 7 01:17:04.330449 kubelet[2740]: I0307 01:17:04.330320 2740 policy_none.go:49] "None policy: Start" Mar 7 01:17:04.330449 kubelet[2740]: I0307 01:17:04.330334 2740 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 7 01:17:04.330449 kubelet[2740]: I0307 01:17:04.330349 2740 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 7 01:17:04.330449 kubelet[2740]: I0307 01:17:04.330466 2740 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 7 01:17:04.330449 kubelet[2740]: I0307 01:17:04.330477 2740 policy_none.go:47] "Start" Mar 7 01:17:04.424850 kubelet[2740]: E0307 01:17:04.422353 2740 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 7 01:17:04.557045 kubelet[2740]: E0307 01:17:04.548311 2740 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 01:17:04.557045 kubelet[2740]: I0307 01:17:04.556557 2740 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 7 01:17:04.557045 kubelet[2740]: I0307 01:17:04.556602 2740 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 01:17:05.048386 kubelet[2740]: 
I0307 01:17:04.565687 2740 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 7 01:17:05.213121 kubelet[2740]: E0307 01:17:05.192658 2740 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 7 01:17:05.223548 kubelet[2740]: I0307 01:17:05.223496 2740 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 7 01:17:05.241347 containerd[1478]: time="2026-03-07T01:17:05.241291376Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 7 01:17:05.268898 kubelet[2740]: I0307 01:17:05.252494 2740 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 7 01:17:05.737880 kubelet[2740]: I0307 01:17:05.734563 2740 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 7 01:17:05.876838 kubelet[2740]: I0307 01:17:05.866486 2740 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 7 01:17:05.876838 kubelet[2740]: I0307 01:17:05.866610 2740 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 7 01:17:06.044317 kubelet[2740]: I0307 01:17:06.031868 2740 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 7 01:17:06.044317 kubelet[2740]: I0307 01:17:06.033505 2740 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 7 01:17:06.044317 kubelet[2740]: I0307 01:17:06.038160 2740 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 7 01:17:06.061052 kubelet[2740]: I0307 01:17:06.058580 2740 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 7 01:17:06.086022 systemd[1]: Created slice kubepods-besteffort-pod7623562d_3c2e_420a_96e9_76612b8b591c.slice - libcontainer container kubepods-besteffort-pod7623562d_3c2e_420a_96e9_76612b8b591c.slice. 
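The three "Creating a mirror pod for static pod" lines follow from the static pod path registered at startup ("Adding static pod path" path="/etc/kubernetes/manifests"): the kubelet runs any pod manifest it finds there directly, then publishes a read-only mirror pod to the API server so the control-plane pods are visible via kubectl. The shape of such a manifest, as an illustrative example only; the image tag is inferred from kubeletVersion=v1.34.4 above, not read from disk:

    # /etc/kubernetes/manifests/kube-scheduler.yaml (illustrative)
    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-scheduler
      namespace: kube-system
    spec:
      hostNetwork: true
      containers:
      - name: kube-scheduler
        image: registry.k8s.io/kube-scheduler:v1.34.4
        command:
        - kube-scheduler
        - --kubeconfig=/etc/kubernetes/scheduler.conf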
Mar 7 01:17:06.126893 kubelet[2740]: I0307 01:17:06.116351 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f54a05ccde2a0003764b2d7cbdcc31bf-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f54a05ccde2a0003764b2d7cbdcc31bf\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:17:06.126893 kubelet[2740]: I0307 01:17:06.116414 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f54a05ccde2a0003764b2d7cbdcc31bf-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f54a05ccde2a0003764b2d7cbdcc31bf\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:17:06.126893 kubelet[2740]: I0307 01:17:06.116453 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f54a05ccde2a0003764b2d7cbdcc31bf-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f54a05ccde2a0003764b2d7cbdcc31bf\") " pod="kube-system/kube-apiserver-localhost" Mar 7 01:17:06.126893 kubelet[2740]: I0307 01:17:06.116486 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:17:06.126893 kubelet[2740]: I0307 01:17:06.116515 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbltw\" (UniqueName: \"kubernetes.io/projected/7623562d-3c2e-420a-96e9-76612b8b591c-kube-api-access-hbltw\") pod \"kube-proxy-pmsdg\" (UID: \"7623562d-3c2e-420a-96e9-76612b8b591c\") " pod="kube-system/kube-proxy-pmsdg" Mar 7 01:17:06.128287 kubelet[2740]: I0307 01:17:06.116542 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:17:06.128287 kubelet[2740]: I0307 01:17:06.116565 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:17:06.128287 kubelet[2740]: I0307 01:17:06.116591 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 7 01:17:06.128287 kubelet[2740]: I0307 01:17:06.116618 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " 
pod="kube-system/kube-controller-manager-localhost" Mar 7 01:17:06.128287 kubelet[2740]: I0307 01:17:06.116643 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost" Mar 7 01:17:06.128586 kubelet[2740]: I0307 01:17:06.116818 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7623562d-3c2e-420a-96e9-76612b8b591c-kube-proxy\") pod \"kube-proxy-pmsdg\" (UID: \"7623562d-3c2e-420a-96e9-76612b8b591c\") " pod="kube-system/kube-proxy-pmsdg" Mar 7 01:17:06.128586 kubelet[2740]: I0307 01:17:06.116855 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7623562d-3c2e-420a-96e9-76612b8b591c-xtables-lock\") pod \"kube-proxy-pmsdg\" (UID: \"7623562d-3c2e-420a-96e9-76612b8b591c\") " pod="kube-system/kube-proxy-pmsdg" Mar 7 01:17:06.128586 kubelet[2740]: I0307 01:17:06.116881 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7623562d-3c2e-420a-96e9-76612b8b591c-lib-modules\") pod \"kube-proxy-pmsdg\" (UID: \"7623562d-3c2e-420a-96e9-76612b8b591c\") " pod="kube-system/kube-proxy-pmsdg" Mar 7 01:17:06.787712 kubelet[2740]: E0307 01:17:06.406416 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:17:06.787712 kubelet[2740]: E0307 01:17:06.406639 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:17:06.787712 kubelet[2740]: E0307 01:17:06.439289 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:17:07.072698 kubelet[2740]: E0307 01:17:07.072341 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:17:07.086020 containerd[1478]: time="2026-03-07T01:17:07.085885203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pmsdg,Uid:7623562d-3c2e-420a-96e9-76612b8b591c,Namespace:kube-system,Attempt:0,}" Mar 7 01:17:07.896358 kubelet[2740]: E0307 01:17:07.896103 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:17:07.915486 kubelet[2740]: E0307 01:17:07.915273 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:17:07.921218 kubelet[2740]: E0307 01:17:07.921010 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:17:08.752417 kubelet[2740]: I0307 01:17:08.721394 2740 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.721328973 podStartE2EDuration="2.721328973s" podCreationTimestamp="2026-03-07 01:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:17:08.703270677 +0000 UTC m=+14.028033363" watchObservedRunningTime="2026-03-07 01:17:08.721328973 +0000 UTC m=+14.046091660" Mar 7 01:17:08.771073 kubelet[2740]: I0307 01:17:08.758378 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.758349905 podStartE2EDuration="2.758349905s" podCreationTimestamp="2026-03-07 01:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:17:08.023912329 +0000 UTC m=+13.348675025" watchObservedRunningTime="2026-03-07 01:17:08.758349905 +0000 UTC m=+14.083112600" Mar 7 01:17:09.330032 kubelet[2740]: E0307 01:17:09.327245 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:17:09.330032 kubelet[2740]: E0307 01:17:09.328820 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:17:10.561190 containerd[1478]: time="2026-03-07T01:17:10.511642959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:17:10.561190 containerd[1478]: time="2026-03-07T01:17:10.511783230Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:17:10.561190 containerd[1478]: time="2026-03-07T01:17:10.511798929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:17:10.561190 containerd[1478]: time="2026-03-07T01:17:10.512586352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:17:10.702103 kubelet[2740]: I0307 01:17:10.671210 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.67118391 podStartE2EDuration="4.67118391s" podCreationTimestamp="2026-03-07 01:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:17:09.422703185 +0000 UTC m=+14.747465871" watchObservedRunningTime="2026-03-07 01:17:10.67118391 +0000 UTC m=+15.995946576" Mar 7 01:17:12.278269 update_engine[1464]: I20260307 01:17:12.114432 1464 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 01:17:12.278269 update_engine[1464]: I20260307 01:17:12.121321 1464 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 01:17:12.278269 update_engine[1464]: I20260307 01:17:12.218043 1464 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 7 01:17:12.259535 systemd[1]: Started cri-containerd-ce911e89213c96f6b96d1a37d6638fb241b94e2c753dcae6972d1635b4fd4e8f.scope - libcontainer container ce911e89213c96f6b96d1a37d6638fb241b94e2c753dcae6972d1635b4fd4e8f. 
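The cri-containerd-<id>.scope unit started above is the per-sandbox systemd cgroup scope containerd creates under the systemd cgroup driver (the node config earlier shows "CgroupDriver":"systemd"). To map a scope ID back to a pod, the CRI endpoint can be queried directly; the crictl subcommands below are standard, while the socket path is containerd's conventional default rather than something this log states:

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock inspectp ce911e89213c96f6b96d1a37d6638fb241b94e2c753dcae6972d1635b4fd4e8f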
Mar 7 01:17:12.337320 update_engine[1464]: E20260307 01:17:12.302139 1464 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 01:17:12.337320 update_engine[1464]: I20260307 01:17:12.302260 1464 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Mar 7 01:17:14.787112 containerd[1478]: time="2026-03-07T01:17:14.761652263Z" level=error msg="get state for ce911e89213c96f6b96d1a37d6638fb241b94e2c753dcae6972d1635b4fd4e8f" error="context deadline exceeded: unknown" Mar 7 01:17:14.787112 containerd[1478]: time="2026-03-07T01:17:14.762479990Z" level=warning msg="unknown status" status=0 Mar 7 01:17:15.743607 kubelet[2740]: E0307 01:17:15.739202 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:17:15.743607 kubelet[2740]: E0307 01:17:15.740038 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:17:15.830197 kubelet[2740]: E0307 01:17:15.830010 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:17:16.427009 kubelet[2740]: E0307 01:17:16.422578 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:17:16.503833 containerd[1478]: time="2026-03-07T01:17:16.503739914Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Mar 7 01:17:17.419431 kubelet[2740]: E0307 01:17:17.349407 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:17:18.417470 sudo[2777]: pam_unix(sudo:session): session closed for user root Mar 7 01:17:18.455142 containerd[1478]: time="2026-03-07T01:17:18.447287949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pmsdg,Uid:7623562d-3c2e-420a-96e9-76612b8b591c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce911e89213c96f6b96d1a37d6638fb241b94e2c753dcae6972d1635b4fd4e8f\"" Mar 7 01:17:18.496834 kubelet[2740]: E0307 01:17:18.495433 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:17:18.811505 containerd[1478]: time="2026-03-07T01:17:18.809568529Z" level=info msg="CreateContainer within sandbox \"ce911e89213c96f6b96d1a37d6638fb241b94e2c753dcae6972d1635b4fd4e8f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 7 01:17:19.724341 containerd[1478]: time="2026-03-07T01:17:19.724220026Z" level=info msg="CreateContainer within sandbox \"ce911e89213c96f6b96d1a37d6638fb241b94e2c753dcae6972d1635b4fd4e8f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d0a99632dae79680189c6af9a5481ac576c87d86997bba0735c88f5dabeadd5e\"" Mar 7 01:17:19.732890 containerd[1478]: time="2026-03-07T01:17:19.728298048Z" level=info msg="StartContainer for \"d0a99632dae79680189c6af9a5481ac576c87d86997bba0735c88f5dabeadd5e\"" Mar 7 01:17:22.665843 update_engine[1464]: I20260307 01:17:22.632148 1464 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 01:17:22.665843 update_engine[1464]: 
I20260307 01:17:22.654007 1464 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 01:17:22.665843 update_engine[1464]: I20260307 01:17:22.654681 1464 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 7 01:17:22.754153 update_engine[1464]: E20260307 01:17:22.741896 1464 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 01:17:22.754153 update_engine[1464]: I20260307 01:17:22.742117 1464 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 7 01:17:22.754153 update_engine[1464]: I20260307 01:17:22.742185 1464 omaha_request_action.cc:617] Omaha request response: Mar 7 01:17:22.754153 update_engine[1464]: E20260307 01:17:22.742506 1464 omaha_request_action.cc:636] Omaha request network transfer failed. Mar 7 01:17:22.754153 update_engine[1464]: I20260307 01:17:22.742780 1464 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Mar 7 01:17:22.754153 update_engine[1464]: I20260307 01:17:22.742805 1464 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 7 01:17:22.754153 update_engine[1464]: I20260307 01:17:22.742818 1464 update_attempter.cc:306] Processing Done. Mar 7 01:17:22.754153 update_engine[1464]: E20260307 01:17:22.742842 1464 update_attempter.cc:619] Update failed. Mar 7 01:17:22.754153 update_engine[1464]: I20260307 01:17:22.742858 1464 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Mar 7 01:17:22.754153 update_engine[1464]: I20260307 01:17:22.742871 1464 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Mar 7 01:17:22.754153 update_engine[1464]: I20260307 01:17:22.742883 1464 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Mar 7 01:17:22.754153 update_engine[1464]: I20260307 01:17:22.743072 1464 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 7 01:17:22.754153 update_engine[1464]: I20260307 01:17:22.743191 1464 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 7 01:17:22.754153 update_engine[1464]: I20260307 01:17:22.743216 1464 omaha_request_action.cc:272] Request: Mar 7 01:17:22.754153 update_engine[1464]: [multi-line Omaha request XML not preserved in this capture] Mar 7 01:17:22.755028 update_engine[1464]: I20260307 01:17:22.743232 1464 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 01:17:22.755028 update_engine[1464]: I20260307 01:17:22.743600 1464 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 01:17:22.765099 update_engine[1464]: I20260307 01:17:22.763770 1464 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
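The repeated "Could not resolve host: disabled" failures above are the expected result of switching the update server off: on Flatcar, setting SERVER=disabled in /etc/flatcar/update.conf (assumed to be this host's configuration; the log does not show the file) makes update_engine post its Omaha request to the literal hostname "disabled", which never resolves. A minimal sketch of the failing lookup:

```python
# Sketch: resolve the literal hostname "disabled", as update_engine does when
# the Omaha SERVER is set to "disabled" (assumed configuration for this host).
import socket

try:
    socket.getaddrinfo("disabled", 443)
except socket.gaierror as exc:
    # Mirrors the libcurl error above: "Could not resolve host: disabled"
    print(f"Could not resolve host: disabled ({exc})")
```

The rest of the sequence is update_engine's normal failure handling: the error is folded into kActionCodeOmahaErrorInHTTPResponse, ignored "until we get a valid Omaha response", and, as the next entries show, the following check is scheduled roughly 46 minutes out.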
Mar 7 01:17:22.801353 locksmithd[1509]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Mar 7 01:17:22.816983 update_engine[1464]: E20260307 01:17:22.811368 1464 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 01:17:22.816983 update_engine[1464]: I20260307 01:17:22.811499 1464 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 7 01:17:22.816983 update_engine[1464]: I20260307 01:17:22.811522 1464 omaha_request_action.cc:617] Omaha request response: Mar 7 01:17:22.816983 update_engine[1464]: I20260307 01:17:22.811539 1464 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 7 01:17:22.816983 update_engine[1464]: I20260307 01:17:22.811555 1464 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 7 01:17:22.816983 update_engine[1464]: I20260307 01:17:22.811568 1464 update_attempter.cc:306] Processing Done. Mar 7 01:17:22.816983 update_engine[1464]: I20260307 01:17:22.811582 1464 update_attempter.cc:310] Error event sent. Mar 7 01:17:22.816983 update_engine[1464]: I20260307 01:17:22.811604 1464 update_check_scheduler.cc:74] Next update check in 45m59s Mar 7 01:17:22.845422 locksmithd[1509]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Mar 7 01:17:24.985120 systemd[1]: Started cri-containerd-d0a99632dae79680189c6af9a5481ac576c87d86997bba0735c88f5dabeadd5e.scope - libcontainer container d0a99632dae79680189c6af9a5481ac576c87d86997bba0735c88f5dabeadd5e. Mar 7 01:17:27.268567 containerd[1478]: time="2026-03-07T01:17:27.255377519Z" level=error msg="get state for d0a99632dae79680189c6af9a5481ac576c87d86997bba0735c88f5dabeadd5e" error="context deadline exceeded: unknown" Mar 7 01:17:27.268567 containerd[1478]: time="2026-03-07T01:17:27.286410951Z" level=warning msg="unknown status" status=0 Mar 7 01:17:27.941244 containerd[1478]: time="2026-03-07T01:17:27.941142440Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Mar 7 01:17:29.339527 containerd[1478]: time="2026-03-07T01:17:29.335337598Z" level=info msg="StartContainer for \"d0a99632dae79680189c6af9a5481ac576c87d86997bba0735c88f5dabeadd5e\" returns successfully" Mar 7 01:17:29.882745 kubelet[2740]: E0307 01:17:29.864261 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:17:31.556502 kubelet[2740]: E0307 01:17:31.528860 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:17:36.027840 kubelet[2740]: I0307 01:17:36.027296 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pmsdg" podStartSLOduration=35.027273312 podStartE2EDuration="35.027273312s" podCreationTimestamp="2026-03-07 01:17:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:17:30.080732672 +0000 UTC m=+35.405495378" watchObservedRunningTime="2026-03-07 01:17:36.027273312 +0000 UTC m=+41.352035998" Mar 7 01:17:36.097070 systemd[1]: Created slice kubepods-burstable-podc0367950_f8de_4cea_8cbc_20a8d9150e54.slice - libcontainer container 
kubepods-burstable-podc0367950_f8de_4cea_8cbc_20a8d9150e54.slice. Mar 7 01:17:36.168821 kubelet[2740]: I0307 01:17:36.167104 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c0367950-f8de-4cea-8cbc-20a8d9150e54-cilium-run\") pod \"cilium-6hzq5\" (UID: \"c0367950-f8de-4cea-8cbc-20a8d9150e54\") " pod="kube-system/cilium-6hzq5" Mar 7 01:17:36.168821 kubelet[2740]: I0307 01:17:36.167163 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c0367950-f8de-4cea-8cbc-20a8d9150e54-bpf-maps\") pod \"cilium-6hzq5\" (UID: \"c0367950-f8de-4cea-8cbc-20a8d9150e54\") " pod="kube-system/cilium-6hzq5" Mar 7 01:17:36.168821 kubelet[2740]: I0307 01:17:36.167186 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c0367950-f8de-4cea-8cbc-20a8d9150e54-cilium-cgroup\") pod \"cilium-6hzq5\" (UID: \"c0367950-f8de-4cea-8cbc-20a8d9150e54\") " pod="kube-system/cilium-6hzq5" Mar 7 01:17:36.168821 kubelet[2740]: I0307 01:17:36.167207 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c0367950-f8de-4cea-8cbc-20a8d9150e54-etc-cni-netd\") pod \"cilium-6hzq5\" (UID: \"c0367950-f8de-4cea-8cbc-20a8d9150e54\") " pod="kube-system/cilium-6hzq5" Mar 7 01:17:36.168821 kubelet[2740]: I0307 01:17:36.167236 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c0367950-f8de-4cea-8cbc-20a8d9150e54-hostproc\") pod \"cilium-6hzq5\" (UID: \"c0367950-f8de-4cea-8cbc-20a8d9150e54\") " pod="kube-system/cilium-6hzq5" Mar 7 01:17:36.168821 kubelet[2740]: I0307 01:17:36.167259 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c0367950-f8de-4cea-8cbc-20a8d9150e54-cni-path\") pod \"cilium-6hzq5\" (UID: \"c0367950-f8de-4cea-8cbc-20a8d9150e54\") " pod="kube-system/cilium-6hzq5" Mar 7 01:17:36.197668 kubelet[2740]: I0307 01:17:36.167357 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c0367950-f8de-4cea-8cbc-20a8d9150e54-lib-modules\") pod \"cilium-6hzq5\" (UID: \"c0367950-f8de-4cea-8cbc-20a8d9150e54\") " pod="kube-system/cilium-6hzq5" Mar 7 01:17:36.183767 systemd[1]: Created slice kubepods-besteffort-pod8b3fe19e_aafe_42f5_be8f_025558c799ca.slice - libcontainer container kubepods-besteffort-pod8b3fe19e_aafe_42f5_be8f_025558c799ca.slice. 
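The VerifyControllerAttachedVolume entries above and below enumerate the cilium-6hzq5 pod's volumes by name only. For orientation, here is a sketch mapping those names to host paths; the paths are assumptions taken from the upstream Cilium DaemonSet defaults, not values recorded in this log:

```python
# Volume name -> host path for the cilium pod above; the paths are upstream
# Cilium DaemonSet defaults (assumed), since the log records only the names.
CILIUM_HOST_PATHS = {
    "cilium-run":    "/var/run/cilium",       # agent state and API socket
    "bpf-maps":      "/sys/fs/bpf",           # pinned eBPF maps
    "cilium-cgroup": "/run/cilium/cgroupv2",  # cgroup2 mount point
    "etc-cni-netd":  "/etc/cni/net.d",        # CNI config written by the agent
    "hostproc":      "/proc",                 # host /proc
    "cni-path":      "/opt/cni/bin",          # CNI plugin binaries
    "lib-modules":   "/lib/modules",          # kernel modules
    "xtables-lock":  "/run/xtables.lock",     # iptables lock file
}

for name, path in CILIUM_HOST_PATHS.items():
    print(f"{name:13} -> {path}")
```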
Mar 7 01:17:36.405178 kubelet[2740]: I0307 01:17:36.343323 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c0367950-f8de-4cea-8cbc-20a8d9150e54-cilium-config-path\") pod \"cilium-6hzq5\" (UID: \"c0367950-f8de-4cea-8cbc-20a8d9150e54\") " pod="kube-system/cilium-6hzq5" Mar 7 01:17:36.435906 kubelet[2740]: I0307 01:17:36.416429 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c0367950-f8de-4cea-8cbc-20a8d9150e54-clustermesh-secrets\") pod \"cilium-6hzq5\" (UID: \"c0367950-f8de-4cea-8cbc-20a8d9150e54\") " pod="kube-system/cilium-6hzq5" Mar 7 01:17:36.435906 kubelet[2740]: I0307 01:17:36.421626 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c0367950-f8de-4cea-8cbc-20a8d9150e54-hubble-tls\") pod \"cilium-6hzq5\" (UID: \"c0367950-f8de-4cea-8cbc-20a8d9150e54\") " pod="kube-system/cilium-6hzq5" Mar 7 01:17:36.435906 kubelet[2740]: I0307 01:17:36.422318 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c0367950-f8de-4cea-8cbc-20a8d9150e54-host-proc-sys-kernel\") pod \"cilium-6hzq5\" (UID: \"c0367950-f8de-4cea-8cbc-20a8d9150e54\") " pod="kube-system/cilium-6hzq5" Mar 7 01:17:36.435906 kubelet[2740]: I0307 01:17:36.422364 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsvjs\" (UniqueName: \"kubernetes.io/projected/c0367950-f8de-4cea-8cbc-20a8d9150e54-kube-api-access-nsvjs\") pod \"cilium-6hzq5\" (UID: \"c0367950-f8de-4cea-8cbc-20a8d9150e54\") " pod="kube-system/cilium-6hzq5" Mar 7 01:17:36.435906 kubelet[2740]: I0307 01:17:36.422433 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8b3fe19e-aafe-42f5-be8f-025558c799ca-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-nwscc\" (UID: \"8b3fe19e-aafe-42f5-be8f-025558c799ca\") " pod="kube-system/cilium-operator-6f9c7c5859-nwscc" Mar 7 01:17:36.476392 kubelet[2740]: I0307 01:17:36.422840 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bs77\" (UniqueName: \"kubernetes.io/projected/8b3fe19e-aafe-42f5-be8f-025558c799ca-kube-api-access-2bs77\") pod \"cilium-operator-6f9c7c5859-nwscc\" (UID: \"8b3fe19e-aafe-42f5-be8f-025558c799ca\") " pod="kube-system/cilium-operator-6f9c7c5859-nwscc" Mar 7 01:17:36.476392 kubelet[2740]: I0307 01:17:36.422915 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c0367950-f8de-4cea-8cbc-20a8d9150e54-xtables-lock\") pod \"cilium-6hzq5\" (UID: \"c0367950-f8de-4cea-8cbc-20a8d9150e54\") " pod="kube-system/cilium-6hzq5" Mar 7 01:17:36.476392 kubelet[2740]: I0307 01:17:36.475327 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c0367950-f8de-4cea-8cbc-20a8d9150e54-host-proc-sys-net\") pod \"cilium-6hzq5\" (UID: \"c0367950-f8de-4cea-8cbc-20a8d9150e54\") " pod="kube-system/cilium-6hzq5" Mar 7 01:17:39.042208 kubelet[2740]: E0307 
01:17:39.039417 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:17:39.088006 containerd[1478]: time="2026-03-07T01:17:39.087631962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6hzq5,Uid:c0367950-f8de-4cea-8cbc-20a8d9150e54,Namespace:kube-system,Attempt:0,}" Mar 7 01:17:43.389390 kubelet[2740]: E0307 01:17:43.387793 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:17:44.835730 containerd[1478]: time="2026-03-07T01:17:43.259864054Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:17:44.835730 containerd[1478]: time="2026-03-07T01:17:43.260094520Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:17:44.835730 containerd[1478]: time="2026-03-07T01:17:43.260123374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:17:44.835730 containerd[1478]: time="2026-03-07T01:17:43.260295283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:17:45.212581 containerd[1478]: time="2026-03-07T01:17:45.177834678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-nwscc,Uid:8b3fe19e-aafe-42f5-be8f-025558c799ca,Namespace:kube-system,Attempt:0,}" Mar 7 01:17:45.583019 kubelet[2740]: E0307 01:17:45.571262 2740 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.745s" Mar 7 01:17:49.373578 kubelet[2740]: E0307 01:17:49.364912 2740 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.761s" Mar 7 01:17:50.759450 systemd[1]: run-containerd-runc-k8s.io-8ac02ce029f5409ebd98fc29ab3e37d65aba8544d3d37b37d0513ab433677ab6-runc.YK885C.mount: Deactivated successfully. Mar 7 01:17:50.907094 systemd[1]: Started cri-containerd-8ac02ce029f5409ebd98fc29ab3e37d65aba8544d3d37b37d0513ab433677ab6.scope - libcontainer container 8ac02ce029f5409ebd98fc29ab3e37d65aba8544d3d37b37d0513ab433677ab6. Mar 7 01:17:52.973869 containerd[1478]: time="2026-03-07T01:17:52.959317417Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:17:52.973869 containerd[1478]: time="2026-03-07T01:17:52.961612289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:17:52.973869 containerd[1478]: time="2026-03-07T01:17:52.961646282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:17:53.010897 containerd[1478]: time="2026-03-07T01:17:53.010674672Z" level=error msg="get state for 8ac02ce029f5409ebd98fc29ab3e37d65aba8544d3d37b37d0513ab433677ab6" error="context deadline exceeded: unknown" Mar 7 01:17:53.011522 containerd[1478]: time="2026-03-07T01:17:53.011304149Z" level=warning msg="unknown status" status=0 Mar 7 01:17:53.012115 containerd[1478]: time="2026-03-07T01:17:53.007180799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:17:53.136828 kubelet[2740]: E0307 01:17:53.134569 2740 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.285s" Mar 7 01:17:53.935414 containerd[1478]: time="2026-03-07T01:17:53.935211266Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Mar 7 01:17:54.156243 systemd[1]: Started cri-containerd-19242b35dba757eab2a47cdd404da35d99ec9aafb76a7e887b6d863a604da086.scope - libcontainer container 19242b35dba757eab2a47cdd404da35d99ec9aafb76a7e887b6d863a604da086. Mar 7 01:17:54.389215 containerd[1478]: time="2026-03-07T01:17:54.389150974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6hzq5,Uid:c0367950-f8de-4cea-8cbc-20a8d9150e54,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ac02ce029f5409ebd98fc29ab3e37d65aba8544d3d37b37d0513ab433677ab6\"" Mar 7 01:17:54.407477 kubelet[2740]: E0307 01:17:54.405891 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:17:54.481529 containerd[1478]: time="2026-03-07T01:17:54.464103048Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 7 01:17:55.119411 containerd[1478]: time="2026-03-07T01:17:55.118885139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-nwscc,Uid:8b3fe19e-aafe-42f5-be8f-025558c799ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"19242b35dba757eab2a47cdd404da35d99ec9aafb76a7e887b6d863a604da086\"" Mar 7 01:17:55.120790 kubelet[2740]: E0307 01:17:55.120753 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:18:13.564318 kubelet[2740]: E0307 01:18:13.554818 2740 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.823s" Mar 7 01:18:18.041208 kubelet[2740]: E0307 01:18:18.040788 2740 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.789s" Mar 7 01:18:20.864581 kubelet[2740]: E0307 01:18:20.864360 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:18:28.345545 kubelet[2740]: E0307 01:18:28.342173 2740 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.814s" Mar 7 01:18:32.254192 kubelet[2740]: E0307 01:18:32.254053 2740 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.123s" Mar 7 01:18:36.097660 kubelet[2740]: E0307 01:18:36.080754 2740 kubelet.go:2618] 
"Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.276s" Mar 7 01:18:37.807628 kubelet[2740]: E0307 01:18:37.804655 2740 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.707s" Mar 7 01:18:39.023042 kubelet[2740]: E0307 01:18:39.005473 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:18:40.784841 kubelet[2740]: E0307 01:18:40.780863 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:18:48.765591 kubelet[2740]: E0307 01:18:48.763163 2740 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.734s" Mar 7 01:18:48.801026 kubelet[2740]: E0307 01:18:48.800917 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:19:00.167384 kubelet[2740]: E0307 01:19:00.167088 2740 kubelet_node_status.go:398] "Node not becoming ready in time after startup" Mar 7 01:19:03.796774 kubelet[2740]: E0307 01:19:03.791215 2740 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 7 01:19:08.799745 kubelet[2740]: E0307 01:19:08.798881 2740 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 7 01:19:13.810655 kubelet[2740]: E0307 01:19:13.801375 2740 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 7 01:19:18.859289 kubelet[2740]: E0307 01:19:18.858808 2740 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 7 01:19:22.495224 kubelet[2740]: E0307 01:19:22.483330 2740 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.541s" Mar 7 01:19:23.888434 kubelet[2740]: E0307 01:19:23.886167 2740 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 7 01:19:24.787182 kubelet[2740]: E0307 01:19:24.781011 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:19:29.259753 kubelet[2740]: E0307 01:19:29.244081 2740 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 7 01:19:33.709697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount445903255.mount: Deactivated successfully. 
Mar 7 01:19:34.249390 kubelet[2740]: E0307 01:19:34.247541 2740 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 7 01:19:39.266453 kubelet[2740]: E0307 01:19:39.266234 2740 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 7 01:19:39.801891 kubelet[2740]: E0307 01:19:39.799695 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:19:44.272573 kubelet[2740]: E0307 01:19:44.272523 2740 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 7 01:19:49.277582 kubelet[2740]: E0307 01:19:49.277205 2740 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 7 01:19:49.809381 kubelet[2740]: E0307 01:19:49.806642 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:19:54.296767 kubelet[2740]: E0307 01:19:54.296054 2740 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 7 01:19:55.918392 containerd[1478]: time="2026-03-07T01:19:55.914107115Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:19:55.927557 containerd[1478]: time="2026-03-07T01:19:55.927465546Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 7 01:19:55.931034 containerd[1478]: time="2026-03-07T01:19:55.930733687Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:19:55.943120 containerd[1478]: time="2026-03-07T01:19:55.939579823Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 2m1.475397318s" Mar 7 01:19:55.943120 containerd[1478]: time="2026-03-07T01:19:55.939642009Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 7 01:19:55.957639 containerd[1478]: time="2026-03-07T01:19:55.952677835Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 7 01:19:55.969273 containerd[1478]: 
time="2026-03-07T01:19:55.968348297Z" level=info msg="CreateContainer within sandbox \"8ac02ce029f5409ebd98fc29ab3e37d65aba8544d3d37b37d0513ab433677ab6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 7 01:19:56.079610 containerd[1478]: time="2026-03-07T01:19:56.076709757Z" level=info msg="CreateContainer within sandbox \"8ac02ce029f5409ebd98fc29ab3e37d65aba8544d3d37b37d0513ab433677ab6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ddc565909c8e69060b0f12175897c31a6af9316c69cbfd35ef3e4afd03fbb86f\"" Mar 7 01:19:56.093192 containerd[1478]: time="2026-03-07T01:19:56.091317072Z" level=info msg="StartContainer for \"ddc565909c8e69060b0f12175897c31a6af9316c69cbfd35ef3e4afd03fbb86f\"" Mar 7 01:19:56.419450 systemd[1]: Started cri-containerd-ddc565909c8e69060b0f12175897c31a6af9316c69cbfd35ef3e4afd03fbb86f.scope - libcontainer container ddc565909c8e69060b0f12175897c31a6af9316c69cbfd35ef3e4afd03fbb86f. Mar 7 01:19:56.800254 containerd[1478]: time="2026-03-07T01:19:56.794683131Z" level=info msg="StartContainer for \"ddc565909c8e69060b0f12175897c31a6af9316c69cbfd35ef3e4afd03fbb86f\" returns successfully" Mar 7 01:19:56.815502 kubelet[2740]: E0307 01:19:56.813758 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:19:56.902699 systemd[1]: cri-containerd-ddc565909c8e69060b0f12175897c31a6af9316c69cbfd35ef3e4afd03fbb86f.scope: Deactivated successfully. Mar 7 01:19:57.055773 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ddc565909c8e69060b0f12175897c31a6af9316c69cbfd35ef3e4afd03fbb86f-rootfs.mount: Deactivated successfully. Mar 7 01:19:57.304329 containerd[1478]: time="2026-03-07T01:19:57.304205547Z" level=info msg="shim disconnected" id=ddc565909c8e69060b0f12175897c31a6af9316c69cbfd35ef3e4afd03fbb86f namespace=k8s.io Mar 7 01:19:57.306333 containerd[1478]: time="2026-03-07T01:19:57.305350868Z" level=warning msg="cleaning up after shim disconnected" id=ddc565909c8e69060b0f12175897c31a6af9316c69cbfd35ef3e4afd03fbb86f namespace=k8s.io Mar 7 01:19:57.306333 containerd[1478]: time="2026-03-07T01:19:57.305419376Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:19:57.837461 kubelet[2740]: E0307 01:19:57.830862 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:19:57.838492 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1347828248.mount: Deactivated successfully. 
Mar 7 01:19:57.940389 containerd[1478]: time="2026-03-07T01:19:57.939540168Z" level=info msg="CreateContainer within sandbox \"8ac02ce029f5409ebd98fc29ab3e37d65aba8544d3d37b37d0513ab433677ab6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 7 01:19:58.161497 containerd[1478]: time="2026-03-07T01:19:58.158074798Z" level=info msg="CreateContainer within sandbox \"8ac02ce029f5409ebd98fc29ab3e37d65aba8544d3d37b37d0513ab433677ab6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"22a3a73a0f3899ea76814b74467917573db77267cf7d1641c075af5e47db5882\"" Mar 7 01:19:58.173753 containerd[1478]: time="2026-03-07T01:19:58.169293437Z" level=info msg="StartContainer for \"22a3a73a0f3899ea76814b74467917573db77267cf7d1641c075af5e47db5882\"" Mar 7 01:19:58.502508 systemd[1]: Started cri-containerd-22a3a73a0f3899ea76814b74467917573db77267cf7d1641c075af5e47db5882.scope - libcontainer container 22a3a73a0f3899ea76814b74467917573db77267cf7d1641c075af5e47db5882. Mar 7 01:19:58.734131 containerd[1478]: time="2026-03-07T01:19:58.728584832Z" level=info msg="StartContainer for \"22a3a73a0f3899ea76814b74467917573db77267cf7d1641c075af5e47db5882\" returns successfully" Mar 7 01:19:58.839685 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 7 01:19:58.840711 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 7 01:19:58.840833 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 7 01:19:58.889776 kubelet[2740]: E0307 01:19:58.887700 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:19:58.948067 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 7 01:19:58.949085 systemd[1]: cri-containerd-22a3a73a0f3899ea76814b74467917573db77267cf7d1641c075af5e47db5882.scope: Deactivated successfully. Mar 7 01:19:59.318519 kubelet[2740]: E0307 01:19:59.315868 2740 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 7 01:19:59.325629 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22a3a73a0f3899ea76814b74467917573db77267cf7d1641c075af5e47db5882-rootfs.mount: Deactivated successfully. Mar 7 01:19:59.360044 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
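apply-sysctl-overwrites, started above, is the Cilium init container that adjusts kernel parameters for the datapath; the systemd-sysctl.service stop/start around it is the host re-applying its own sysctl configuration afterwards. The specific overrides are not recorded in the log; the values below are typical Cilium requirements and are assumptions:

```python
# Typical sysctl overrides applied for Cilium's datapath (assumed values; the
# log does not record them). Each call is equivalent to `sysctl -w key=value`.
import subprocess

OVERRIDES = {
    "net.ipv4.conf.all.rp_filter": "0",  # relax strict reverse-path filtering
    "net.ipv4.ip_forward": "1",          # allow forwarding between pod interfaces
}

for key, value in OVERRIDES.items():
    subprocess.run(["sysctl", "-w", f"{key}={value}"], check=False)
```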
Mar 7 01:19:59.430839 containerd[1478]: time="2026-03-07T01:19:59.428079388Z" level=info msg="shim disconnected" id=22a3a73a0f3899ea76814b74467917573db77267cf7d1641c075af5e47db5882 namespace=k8s.io Mar 7 01:19:59.430839 containerd[1478]: time="2026-03-07T01:19:59.428193351Z" level=warning msg="cleaning up after shim disconnected" id=22a3a73a0f3899ea76814b74467917573db77267cf7d1641c075af5e47db5882 namespace=k8s.io Mar 7 01:19:59.430839 containerd[1478]: time="2026-03-07T01:19:59.428214051Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:19:59.897762 kubelet[2740]: E0307 01:19:59.896254 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:19:59.993233 containerd[1478]: time="2026-03-07T01:19:59.986764587Z" level=info msg="CreateContainer within sandbox \"8ac02ce029f5409ebd98fc29ab3e37d65aba8544d3d37b37d0513ab433677ab6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 7 01:20:00.142303 containerd[1478]: time="2026-03-07T01:20:00.136694495Z" level=info msg="CreateContainer within sandbox \"8ac02ce029f5409ebd98fc29ab3e37d65aba8544d3d37b37d0513ab433677ab6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3de17d6a79134d23587392fd00735b50c8f59a5dad3dcd587eba759d2c762d41\"" Mar 7 01:20:00.158546 containerd[1478]: time="2026-03-07T01:20:00.155330779Z" level=info msg="StartContainer for \"3de17d6a79134d23587392fd00735b50c8f59a5dad3dcd587eba759d2c762d41\"" Mar 7 01:20:00.388854 systemd[1]: run-containerd-runc-k8s.io-3de17d6a79134d23587392fd00735b50c8f59a5dad3dcd587eba759d2c762d41-runc.FUmfgX.mount: Deactivated successfully. Mar 7 01:20:00.420496 systemd[1]: Started cri-containerd-3de17d6a79134d23587392fd00735b50c8f59a5dad3dcd587eba759d2c762d41.scope - libcontainer container 3de17d6a79134d23587392fd00735b50c8f59a5dad3dcd587eba759d2c762d41. Mar 7 01:20:00.649092 containerd[1478]: time="2026-03-07T01:20:00.648753723Z" level=info msg="StartContainer for \"3de17d6a79134d23587392fd00735b50c8f59a5dad3dcd587eba759d2c762d41\" returns successfully" Mar 7 01:20:00.706879 systemd[1]: cri-containerd-3de17d6a79134d23587392fd00735b50c8f59a5dad3dcd587eba759d2c762d41.scope: Deactivated successfully. Mar 7 01:20:00.853769 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3de17d6a79134d23587392fd00735b50c8f59a5dad3dcd587eba759d2c762d41-rootfs.mount: Deactivated successfully. 
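mount-bpf-fs, created and started above, mounts the BPF filesystem so pinned maps survive agent restarts. A host-side equivalent of what that container does (a sketch shelling out to mount(8); requires root, and /sys/fs/bpf is the conventional mount point):

```python
# Mount the BPF filesystem the way Cilium's mount-bpf-fs init container does.
# Sketch only: requires root and does not check whether bpffs is already mounted.
import subprocess

subprocess.run(["mount", "bpffs", "/sys/fs/bpf", "-t", "bpf"], check=True)
```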
Mar 7 01:20:00.913015 kubelet[2740]: E0307 01:20:00.912744 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:20:00.944380 containerd[1478]: time="2026-03-07T01:20:00.943583828Z" level=info msg="shim disconnected" id=3de17d6a79134d23587392fd00735b50c8f59a5dad3dcd587eba759d2c762d41 namespace=k8s.io Mar 7 01:20:00.944380 containerd[1478]: time="2026-03-07T01:20:00.943686740Z" level=warning msg="cleaning up after shim disconnected" id=3de17d6a79134d23587392fd00735b50c8f59a5dad3dcd587eba759d2c762d41 namespace=k8s.io Mar 7 01:20:00.944380 containerd[1478]: time="2026-03-07T01:20:00.943705535Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:20:01.940370 kubelet[2740]: E0307 01:20:01.938123 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:20:01.986755 containerd[1478]: time="2026-03-07T01:20:01.986185918Z" level=info msg="CreateContainer within sandbox \"8ac02ce029f5409ebd98fc29ab3e37d65aba8544d3d37b37d0513ab433677ab6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 7 01:20:02.170160 containerd[1478]: time="2026-03-07T01:20:02.169389721Z" level=info msg="CreateContainer within sandbox \"8ac02ce029f5409ebd98fc29ab3e37d65aba8544d3d37b37d0513ab433677ab6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1c09cb310c03fd8dd76ede799b70aa79d311ae32a9b1e44cf56b67c5ad924d20\"" Mar 7 01:20:02.184847 containerd[1478]: time="2026-03-07T01:20:02.183346105Z" level=info msg="StartContainer for \"1c09cb310c03fd8dd76ede799b70aa79d311ae32a9b1e44cf56b67c5ad924d20\"" Mar 7 01:20:02.703492 systemd[1]: Started cri-containerd-1c09cb310c03fd8dd76ede799b70aa79d311ae32a9b1e44cf56b67c5ad924d20.scope - libcontainer container 1c09cb310c03fd8dd76ede799b70aa79d311ae32a9b1e44cf56b67c5ad924d20. Mar 7 01:20:03.214362 systemd[1]: cri-containerd-1c09cb310c03fd8dd76ede799b70aa79d311ae32a9b1e44cf56b67c5ad924d20.scope: Deactivated successfully. Mar 7 01:20:03.254391 containerd[1478]: time="2026-03-07T01:20:03.252846369Z" level=info msg="StartContainer for \"1c09cb310c03fd8dd76ede799b70aa79d311ae32a9b1e44cf56b67c5ad924d20\" returns successfully" Mar 7 01:20:03.434110 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c09cb310c03fd8dd76ede799b70aa79d311ae32a9b1e44cf56b67c5ad924d20-rootfs.mount: Deactivated successfully. 
Mar 7 01:20:03.499722 containerd[1478]: time="2026-03-07T01:20:03.499640734Z" level=info msg="shim disconnected" id=1c09cb310c03fd8dd76ede799b70aa79d311ae32a9b1e44cf56b67c5ad924d20 namespace=k8s.io Mar 7 01:20:03.500212 containerd[1478]: time="2026-03-07T01:20:03.500175462Z" level=warning msg="cleaning up after shim disconnected" id=1c09cb310c03fd8dd76ede799b70aa79d311ae32a9b1e44cf56b67c5ad924d20 namespace=k8s.io Mar 7 01:20:03.500316 containerd[1478]: time="2026-03-07T01:20:03.500297100Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:20:03.724265 containerd[1478]: time="2026-03-07T01:20:03.721360103Z" level=warning msg="cleanup warnings time=\"2026-03-07T01:20:03Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 7 01:20:04.017199 kubelet[2740]: E0307 01:20:04.014403 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:20:04.089037 containerd[1478]: time="2026-03-07T01:20:04.083447472Z" level=info msg="CreateContainer within sandbox \"8ac02ce029f5409ebd98fc29ab3e37d65aba8544d3d37b37d0513ab433677ab6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 7 01:20:04.250223 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1126712277.mount: Deactivated successfully. Mar 7 01:20:04.324138 containerd[1478]: time="2026-03-07T01:20:04.320908528Z" level=info msg="CreateContainer within sandbox \"8ac02ce029f5409ebd98fc29ab3e37d65aba8544d3d37b37d0513ab433677ab6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0892415e55c9535cd28023154245b52a5a8105ae62a69f07962e021a3296ae56\"" Mar 7 01:20:04.341406 containerd[1478]: time="2026-03-07T01:20:04.332581984Z" level=info msg="StartContainer for \"0892415e55c9535cd28023154245b52a5a8105ae62a69f07962e021a3296ae56\"" Mar 7 01:20:04.341583 kubelet[2740]: E0307 01:20:04.334150 2740 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 7 01:20:04.698287 systemd[1]: Started cri-containerd-0892415e55c9535cd28023154245b52a5a8105ae62a69f07962e021a3296ae56.scope - libcontainer container 0892415e55c9535cd28023154245b52a5a8105ae62a69f07962e021a3296ae56. 
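With clean-cilium-state finished and the cilium-agent scope started above, all of the pod's containers have now appeared, in Cilium's standard order (reconstructed from the CreateContainer messages in this log):

```python
# Container order for pod cilium-6hzq5, as recorded by the CreateContainer
# messages in this log: four init containers, then the long-running agent.
CILIUM_CONTAINER_ORDER = [
    "mount-cgroup",
    "apply-sysctl-overwrites",
    "mount-bpf-fs",
    "clean-cilium-state",
    "cilium-agent",
]
print(" -> ".join(CILIUM_CONTAINER_ORDER))
```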
Mar 7 01:20:05.124065 containerd[1478]: time="2026-03-07T01:20:05.105101072Z" level=info msg="StartContainer for \"0892415e55c9535cd28023154245b52a5a8105ae62a69f07962e021a3296ae56\" returns successfully" Mar 7 01:20:05.131622 containerd[1478]: time="2026-03-07T01:20:05.130527245Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:20:05.156329 containerd[1478]: time="2026-03-07T01:20:05.151636215Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 7 01:20:05.165097 containerd[1478]: time="2026-03-07T01:20:05.162412574Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 01:20:05.167812 containerd[1478]: time="2026-03-07T01:20:05.167759992Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 9.21502518s" Mar 7 01:20:05.170172 containerd[1478]: time="2026-03-07T01:20:05.170136832Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 7 01:20:05.350013 containerd[1478]: time="2026-03-07T01:20:05.347752760Z" level=info msg="CreateContainer within sandbox \"19242b35dba757eab2a47cdd404da35d99ec9aafb76a7e887b6d863a604da086\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 7 01:20:05.505621 containerd[1478]: time="2026-03-07T01:20:05.505545592Z" level=info msg="CreateContainer within sandbox \"19242b35dba757eab2a47cdd404da35d99ec9aafb76a7e887b6d863a604da086\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"314228e09dd3f00246522a53ab0401ab966719c2ced2d578f3c946d428390b96\"" Mar 7 01:20:05.552424 containerd[1478]: time="2026-03-07T01:20:05.545113470Z" level=info msg="StartContainer for \"314228e09dd3f00246522a53ab0401ab966719c2ced2d578f3c946d428390b96\"" Mar 7 01:20:06.027808 systemd[1]: run-containerd-runc-k8s.io-314228e09dd3f00246522a53ab0401ab966719c2ced2d578f3c946d428390b96-runc.qxXPPk.mount: Deactivated successfully. Mar 7 01:20:06.091514 systemd[1]: Started cri-containerd-314228e09dd3f00246522a53ab0401ab966719c2ced2d578f3c946d428390b96.scope - libcontainer container 314228e09dd3f00246522a53ab0401ab966719c2ced2d578f3c946d428390b96. 
Mar 7 01:20:06.484116 containerd[1478]: time="2026-03-07T01:20:06.482369164Z" level=info msg="StartContainer for \"314228e09dd3f00246522a53ab0401ab966719c2ced2d578f3c946d428390b96\" returns successfully" Mar 7 01:20:07.410721 kubelet[2740]: E0307 01:20:07.409310 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:20:07.435043 kubelet[2740]: E0307 01:20:07.424899 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:20:08.452370 kubelet[2740]: E0307 01:20:08.452271 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:20:08.460090 kubelet[2740]: E0307 01:20:08.456270 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:20:08.508272 kubelet[2740]: I0307 01:20:08.498598 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6hzq5" podStartSLOduration=31.994094006 podStartE2EDuration="2m33.498572742s" podCreationTimestamp="2026-03-07 01:17:35 +0000 UTC" firstStartedPulling="2026-03-07 01:17:54.443139862 +0000 UTC m=+59.767902608" lastFinishedPulling="2026-03-07 01:19:55.947618647 +0000 UTC m=+181.272381344" observedRunningTime="2026-03-07 01:20:08.378498854 +0000 UTC m=+193.703261549" watchObservedRunningTime="2026-03-07 01:20:08.498572742 +0000 UTC m=+193.823335428" Mar 7 01:20:08.711454 kubelet[2740]: I0307 01:20:08.707745 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-nwscc" podStartSLOduration=23.638569476 podStartE2EDuration="2m33.698148943s" podCreationTimestamp="2026-03-07 01:17:35 +0000 UTC" firstStartedPulling="2026-03-07 01:17:55.129830826 +0000 UTC m=+60.454593492" lastFinishedPulling="2026-03-07 01:20:05.189410293 +0000 UTC m=+190.514172959" observedRunningTime="2026-03-07 01:20:08.697078103 +0000 UTC m=+194.021840799" watchObservedRunningTime="2026-03-07 01:20:08.698148943 +0000 UTC m=+194.022911629" Mar 7 01:20:08.785032 kubelet[2740]: E0307 01:20:08.774907 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:20:09.464111 kubelet[2740]: E0307 01:20:09.459906 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:20:10.509137 kubelet[2740]: E0307 01:20:10.506845 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:20:14.530731 systemd[1]: Created slice kubepods-burstable-podb8e6ec16_db69_4194_9a12_7eef8b7cd856.slice - libcontainer container kubepods-burstable-podb8e6ec16_db69_4194_9a12_7eef8b7cd856.slice. Mar 7 01:20:14.597293 systemd[1]: Created slice kubepods-burstable-pod42d9f658_145a_454a_a7ad_5d9c52ed7336.slice - libcontainer container kubepods-burstable-pod42d9f658_145a_454a_a7ad_5d9c52ed7336.slice. 
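The pod_startup_latency_tracker records above can be cross-checked from their own timestamps: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window. Reproducing the cilium-6hzq5 figures:

```python
# Cross-check of kubelet's startup-latency figures for cilium-6hzq5, using
# the timestamps recorded in the pod_startup_latency_tracker entry above.
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S.%f"
created   = datetime.strptime("2026-03-07 01:17:35.000000", FMT)  # podCreationTimestamp
running   = datetime.strptime("2026-03-07 01:20:08.498572", FMT)  # observedRunningTime
pull_from = datetime.strptime("2026-03-07 01:17:54.443139", FMT)  # firstStartedPulling
pull_to   = datetime.strptime("2026-03-07 01:19:55.947618", FMT)  # lastFinishedPulling

e2e = running - created            # ~2m33.499s  (podStartE2EDuration)
slo = e2e - (pull_to - pull_from)  # ~31.994s    (podStartSLOduration)
print(f"E2E: {e2e.total_seconds():.3f}s  SLO: {slo.total_seconds():.3f}s")
```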
Mar 7 01:20:14.609732 kubelet[2740]: I0307 01:20:14.609685 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8e6ec16-db69-4194-9a12-7eef8b7cd856-config-volume\") pod \"coredns-66bc5c9577-7b98l\" (UID: \"b8e6ec16-db69-4194-9a12-7eef8b7cd856\") " pod="kube-system/coredns-66bc5c9577-7b98l" Mar 7 01:20:14.610705 kubelet[2740]: I0307 01:20:14.610528 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlrm4\" (UniqueName: \"kubernetes.io/projected/b8e6ec16-db69-4194-9a12-7eef8b7cd856-kube-api-access-hlrm4\") pod \"coredns-66bc5c9577-7b98l\" (UID: \"b8e6ec16-db69-4194-9a12-7eef8b7cd856\") " pod="kube-system/coredns-66bc5c9577-7b98l" Mar 7 01:20:14.610705 kubelet[2740]: I0307 01:20:14.610576 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42d9f658-145a-454a-a7ad-5d9c52ed7336-config-volume\") pod \"coredns-66bc5c9577-bnmq5\" (UID: \"42d9f658-145a-454a-a7ad-5d9c52ed7336\") " pod="kube-system/coredns-66bc5c9577-bnmq5" Mar 7 01:20:14.610705 kubelet[2740]: I0307 01:20:14.610598 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r49g7\" (UniqueName: \"kubernetes.io/projected/42d9f658-145a-454a-a7ad-5d9c52ed7336-kube-api-access-r49g7\") pod \"coredns-66bc5c9577-bnmq5\" (UID: \"42d9f658-145a-454a-a7ad-5d9c52ed7336\") " pod="kube-system/coredns-66bc5c9577-bnmq5" Mar 7 01:20:15.206099 kubelet[2740]: E0307 01:20:15.202557 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:20:15.267517 kubelet[2740]: E0307 01:20:15.267324 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:20:15.310603 containerd[1478]: time="2026-03-07T01:20:15.310170708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7b98l,Uid:b8e6ec16-db69-4194-9a12-7eef8b7cd856,Namespace:kube-system,Attempt:0,}" Mar 7 01:20:15.351165 containerd[1478]: time="2026-03-07T01:20:15.349849844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-bnmq5,Uid:42d9f658-145a-454a-a7ad-5d9c52ed7336,Namespace:kube-system,Attempt:0,}" Mar 7 01:20:17.152387 systemd-networkd[1397]: cilium_host: Link UP Mar 7 01:20:17.161553 systemd-networkd[1397]: cilium_net: Link UP Mar 7 01:20:17.161568 systemd-networkd[1397]: cilium_net: Gained carrier Mar 7 01:20:17.168382 systemd-networkd[1397]: cilium_host: Gained carrier Mar 7 01:20:18.107233 systemd-networkd[1397]: cilium_net: Gained IPv6LL Mar 7 01:20:18.107724 systemd-networkd[1397]: cilium_host: Gained IPv6LL Mar 7 01:20:18.143821 systemd-networkd[1397]: cilium_vxlan: Link UP Mar 7 01:20:18.143866 systemd-networkd[1397]: cilium_vxlan: Gained carrier Mar 7 01:20:19.619440 kernel: NET: Registered PF_ALG protocol family Mar 7 01:20:19.891835 systemd-networkd[1397]: cilium_vxlan: Gained IPv6LL Mar 7 01:20:20.496547 systemd[1]: run-containerd-runc-k8s.io-0892415e55c9535cd28023154245b52a5a8105ae62a69f07962e021a3296ae56-runc.erdABt.mount: Deactivated successfully. 
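The systemd-networkd messages above trace Cilium's datapath coming up: cilium_host and cilium_net are the agent's host-side veth pair and cilium_vxlan carries the overlay, while the per-endpoint lxc* devices (including lxc_health) follow just below. A quick node-side way to inspect them (sketch; shells out to ip(8), and the lxc* names vary per endpoint):

```python
# List the Cilium datapath interfaces named in this log. Sketch: assumes
# ip(8) is available; per-endpoint lxc* device names differ on every node.
import subprocess

for dev in ("cilium_host", "cilium_net", "cilium_vxlan", "lxc_health"):
    subprocess.run(["ip", "-brief", "link", "show", "dev", dev], check=False)
```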
Mar 7 01:20:25.230874 systemd-networkd[1397]: lxc_health: Link UP Mar 7 01:20:25.305568 systemd-networkd[1397]: lxc_health: Gained carrier Mar 7 01:20:26.238089 systemd-networkd[1397]: lxc91304b228488: Link UP Mar 7 01:20:26.337635 kernel: eth0: renamed from tmpb129e Mar 7 01:20:26.359115 systemd-networkd[1397]: lxc7c66767b219a: Link UP Mar 7 01:20:26.424079 systemd-networkd[1397]: lxc91304b228488: Gained carrier Mar 7 01:20:26.442232 kernel: eth0: renamed from tmp3998d Mar 7 01:20:26.510617 systemd-networkd[1397]: lxc7c66767b219a: Gained carrier Mar 7 01:20:27.137692 kubelet[2740]: E0307 01:20:27.134904 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:20:27.194719 systemd-networkd[1397]: lxc_health: Gained IPv6LL Mar 7 01:20:27.644550 systemd-networkd[1397]: lxc7c66767b219a: Gained IPv6LL Mar 7 01:20:27.944554 kubelet[2740]: E0307 01:20:27.940792 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:20:28.020586 systemd-networkd[1397]: lxc91304b228488: Gained IPv6LL Mar 7 01:20:35.798176 sudo[1660]: pam_unix(sudo:session): session closed for user root Mar 7 01:20:35.815571 sshd[1656]: pam_unix(sshd:session): session closed for user core Mar 7 01:20:35.833892 systemd[1]: sshd@8-10.0.0.31:22-10.0.0.1:58148.service: Deactivated successfully. Mar 7 01:20:35.855667 systemd[1]: session-9.scope: Deactivated successfully. Mar 7 01:20:35.856525 systemd[1]: session-9.scope: Consumed 37.931s CPU time, 165.3M memory peak, 0B memory swap peak. Mar 7 01:20:35.867471 systemd-logind[1462]: Session 9 logged out. Waiting for processes to exit. Mar 7 01:20:35.887053 systemd-logind[1462]: Removed session 9. Mar 7 01:20:46.790692 containerd[1478]: time="2026-03-07T01:20:46.788543754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:20:46.790692 containerd[1478]: time="2026-03-07T01:20:46.789912994Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:20:46.790692 containerd[1478]: time="2026-03-07T01:20:46.790057774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:20:46.790692 containerd[1478]: time="2026-03-07T01:20:46.790301499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:20:46.845148 containerd[1478]: time="2026-03-07T01:20:46.841111519Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:20:46.845148 containerd[1478]: time="2026-03-07T01:20:46.841225722Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:20:46.845148 containerd[1478]: time="2026-03-07T01:20:46.841243876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:20:46.845850 containerd[1478]: time="2026-03-07T01:20:46.845698274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:20:46.980385 systemd[1]: Started cri-containerd-b129e11220ececf8a6745a62034c0d42dfc588cb193919c54f5aa7c8378ad523.scope - libcontainer container b129e11220ececf8a6745a62034c0d42dfc588cb193919c54f5aa7c8378ad523. Mar 7 01:20:46.997653 systemd[1]: Started cri-containerd-3998dcadb292be2b3f46e9621a082ab2c018a9babfb16798fd6891ba07d6e6fc.scope - libcontainer container 3998dcadb292be2b3f46e9621a082ab2c018a9babfb16798fd6891ba07d6e6fc. Mar 7 01:20:47.064540 systemd-resolved[1399]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 01:20:47.138750 systemd-resolved[1399]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 7 01:20:47.268117 containerd[1478]: time="2026-03-07T01:20:47.267519226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7b98l,Uid:b8e6ec16-db69-4194-9a12-7eef8b7cd856,Namespace:kube-system,Attempt:0,} returns sandbox id \"b129e11220ececf8a6745a62034c0d42dfc588cb193919c54f5aa7c8378ad523\"" Mar 7 01:20:47.294143 kubelet[2740]: E0307 01:20:47.294094 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:20:47.337502 containerd[1478]: time="2026-03-07T01:20:47.337358465Z" level=info msg="CreateContainer within sandbox \"b129e11220ececf8a6745a62034c0d42dfc588cb193919c54f5aa7c8378ad523\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 01:20:47.341667 containerd[1478]: time="2026-03-07T01:20:47.341577995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-bnmq5,Uid:42d9f658-145a-454a-a7ad-5d9c52ed7336,Namespace:kube-system,Attempt:0,} returns sandbox id \"3998dcadb292be2b3f46e9621a082ab2c018a9babfb16798fd6891ba07d6e6fc\"" Mar 7 01:20:47.369166 kubelet[2740]: E0307 01:20:47.366521 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:20:47.466558 containerd[1478]: time="2026-03-07T01:20:47.464066737Z" level=info msg="CreateContainer within sandbox \"3998dcadb292be2b3f46e9621a082ab2c018a9babfb16798fd6891ba07d6e6fc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 01:20:47.648178 containerd[1478]: time="2026-03-07T01:20:47.647782241Z" level=info msg="CreateContainer within sandbox \"b129e11220ececf8a6745a62034c0d42dfc588cb193919c54f5aa7c8378ad523\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6b4491335fa83618d7c66eed3699ab3b0a670873021d84f70608a1e0c7d45d1d\"" Mar 7 01:20:47.651719 containerd[1478]: time="2026-03-07T01:20:47.651502452Z" level=info msg="StartContainer for \"6b4491335fa83618d7c66eed3699ab3b0a670873021d84f70608a1e0c7d45d1d\"" Mar 7 01:20:47.721671 containerd[1478]: time="2026-03-07T01:20:47.721380070Z" level=info msg="CreateContainer within sandbox \"3998dcadb292be2b3f46e9621a082ab2c018a9babfb16798fd6891ba07d6e6fc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"04e990beb8fc8350a735db497752e58a6bf20060b5ed7e4eb4e2bfeff572e68d\"" Mar 7 01:20:47.727226 containerd[1478]: time="2026-03-07T01:20:47.727177614Z" level=info msg="StartContainer for \"04e990beb8fc8350a735db497752e58a6bf20060b5ed7e4eb4e2bfeff572e68d\"" Mar 7 01:20:47.840443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3845447251.mount: 
Deactivated successfully.
Mar 7 01:20:47.884254 systemd[1]: Started cri-containerd-6b4491335fa83618d7c66eed3699ab3b0a670873021d84f70608a1e0c7d45d1d.scope - libcontainer container 6b4491335fa83618d7c66eed3699ab3b0a670873021d84f70608a1e0c7d45d1d.
Mar 7 01:20:47.994714 systemd[1]: Started cri-containerd-04e990beb8fc8350a735db497752e58a6bf20060b5ed7e4eb4e2bfeff572e68d.scope - libcontainer container 04e990beb8fc8350a735db497752e58a6bf20060b5ed7e4eb4e2bfeff572e68d.
Mar 7 01:20:48.201060 containerd[1478]: time="2026-03-07T01:20:48.200760567Z" level=info msg="StartContainer for \"6b4491335fa83618d7c66eed3699ab3b0a670873021d84f70608a1e0c7d45d1d\" returns successfully"
Mar 7 01:20:48.309385 containerd[1478]: time="2026-03-07T01:20:48.308383377Z" level=info msg="StartContainer for \"04e990beb8fc8350a735db497752e58a6bf20060b5ed7e4eb4e2bfeff572e68d\" returns successfully"
Mar 7 01:20:48.324352 kubelet[2740]: E0307 01:20:48.324060 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:20:49.351136 kubelet[2740]: E0307 01:20:49.347226 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:20:49.360639 kubelet[2740]: E0307 01:20:49.355696 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:20:49.517348 kubelet[2740]: I0307 01:20:49.515848 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-bnmq5" podStartSLOduration=193.515822376 podStartE2EDuration="3m13.515822376s" podCreationTimestamp="2026-03-07 01:17:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:20:49.50545559 +0000 UTC m=+234.830218296" watchObservedRunningTime="2026-03-07 01:20:49.515822376 +0000 UTC m=+234.840585041"
Mar 7 01:20:49.531774 kubelet[2740]: I0307 01:20:49.516276 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-7b98l" podStartSLOduration=190.516260472 podStartE2EDuration="3m10.516260472s" podCreationTimestamp="2026-03-07 01:17:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:20:48.434554798 +0000 UTC m=+233.759317474" watchObservedRunningTime="2026-03-07 01:20:49.516260472 +0000 UTC m=+234.841023158"
Mar 7 01:20:50.355582 kubelet[2740]: E0307 01:20:50.354596 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:20:51.363150 kubelet[2740]: E0307 01:20:51.362502 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:20:53.793188 kubelet[2740]: E0307 01:20:53.786503 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:20:53.813240 kubelet[2740]: E0307 01:20:53.801675 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:21:01.307332 kubelet[2740]: E0307 01:21:01.306439 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:21:08.787376 kubelet[2740]: E0307 01:21:08.785770 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:21:09.797720 kubelet[2740]: E0307 01:21:09.796440 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:21:12.779287 kubelet[2740]: E0307 01:21:12.778495 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:21:39.797172 kubelet[2740]: E0307 01:21:39.797117 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:22:01.795077 kubelet[2740]: E0307 01:22:01.793423 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:22:10.779228 kubelet[2740]: E0307 01:22:10.777864 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:22:11.902280 kubelet[2740]: E0307 01:22:11.894433 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:22:13.835051 kubelet[2740]: E0307 01:22:13.822370 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:22:22.802646 kubelet[2740]: E0307 01:22:22.780377 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:22:25.786148 kubelet[2740]: E0307 01:22:25.778570 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:22:26.831691 kubelet[2740]: E0307 01:22:26.808516 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:22:53.812423 kubelet[2740]: E0307 01:22:53.798693 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:23:06.788363 kubelet[2740]: E0307 01:23:06.781544 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:23:14.783464 kubelet[2740]: E0307 01:23:14.776614 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:23:24.792399 kubelet[2740]: E0307 01:23:24.779291 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:23:25.778444 systemd[1]: Started sshd@9-10.0.0.31:22-10.0.0.1:51232.service - OpenSSH per-connection server daemon (10.0.0.1:51232).
Mar 7 01:23:26.081323 sshd[4372]: Accepted publickey for core from 10.0.0.1 port 51232 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:23:26.116080 sshd[4372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:23:26.236282 systemd-logind[1462]: New session 10 of user core.
Mar 7 01:23:26.317011 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 7 01:23:27.549279 sshd[4372]: pam_unix(sshd:session): session closed for user core
Mar 7 01:23:27.565377 systemd[1]: sshd@9-10.0.0.31:22-10.0.0.1:51232.service: Deactivated successfully.
Mar 7 01:23:27.584212 systemd[1]: session-10.scope: Deactivated successfully.
Mar 7 01:23:27.606293 systemd-logind[1462]: Session 10 logged out. Waiting for processes to exit.
Mar 7 01:23:27.612188 systemd-logind[1462]: Removed session 10.
Mar 7 01:23:32.691403 systemd[1]: Started sshd@10-10.0.0.31:22-10.0.0.1:44162.service - OpenSSH per-connection server daemon (10.0.0.1:44162).
Mar 7 01:23:32.922140 sshd[4390]: Accepted publickey for core from 10.0.0.1 port 44162 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:23:32.935850 sshd[4390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:23:32.962259 systemd-logind[1462]: New session 11 of user core.
Mar 7 01:23:33.036443 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 7 01:23:33.830070 sshd[4390]: pam_unix(sshd:session): session closed for user core
Mar 7 01:23:33.852774 systemd[1]: sshd@10-10.0.0.31:22-10.0.0.1:44162.service: Deactivated successfully.
Mar 7 01:23:33.861356 systemd[1]: session-11.scope: Deactivated successfully.
Mar 7 01:23:33.876463 systemd-logind[1462]: Session 11 logged out. Waiting for processes to exit.
Mar 7 01:23:33.896510 systemd-logind[1462]: Removed session 11.
Mar 7 01:23:34.803610 kubelet[2740]: E0307 01:23:34.801131 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:23:40.271289 systemd[1]: Started sshd@11-10.0.0.31:22-10.0.0.1:44168.service - OpenSSH per-connection server daemon (10.0.0.1:44168).
Mar 7 01:23:40.350433 kubelet[2740]: E0307 01:23:40.350186 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:23:40.828870 sshd[4406]: Accepted publickey for core from 10.0.0.1 port 44168 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:23:40.833183 sshd[4406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:23:41.116088 systemd-logind[1462]: New session 12 of user core.
Mar 7 01:23:41.271542 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 7 01:23:45.510365 kubelet[2740]: E0307 01:23:45.509579 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:23:45.962689 sshd[4406]: pam_unix(sshd:session): session closed for user core
Mar 7 01:23:46.028820 systemd[1]: sshd@11-10.0.0.31:22-10.0.0.1:44168.service: Deactivated successfully.
Mar 7 01:23:46.053654 systemd[1]: session-12.scope: Deactivated successfully.
Mar 7 01:23:46.054377 systemd[1]: session-12.scope: Consumed 1.947s CPU time.
Mar 7 01:23:46.062091 systemd-logind[1462]: Session 12 logged out. Waiting for processes to exit.
Mar 7 01:23:46.069666 systemd-logind[1462]: Removed session 12.
Mar 7 01:23:48.793664 kubelet[2740]: E0307 01:23:48.780184 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:23:51.444573 systemd[1]: Started sshd@12-10.0.0.31:22-10.0.0.1:45706.service - OpenSSH per-connection server daemon (10.0.0.1:45706).
Mar 7 01:23:51.761773 sshd[4428]: Accepted publickey for core from 10.0.0.1 port 45706 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:23:51.767277 sshd[4428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:23:51.823843 systemd-logind[1462]: New session 13 of user core.
Mar 7 01:23:51.871730 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 7 01:23:52.361179 sshd[4428]: pam_unix(sshd:session): session closed for user core
Mar 7 01:23:52.387840 systemd[1]: sshd@12-10.0.0.31:22-10.0.0.1:45706.service: Deactivated successfully.
Mar 7 01:23:52.405482 systemd[1]: session-13.scope: Deactivated successfully.
Mar 7 01:23:52.407128 systemd-logind[1462]: Session 13 logged out. Waiting for processes to exit.
Mar 7 01:23:52.420254 systemd-logind[1462]: Removed session 13.
Mar 7 01:23:57.438631 systemd[1]: Started sshd@13-10.0.0.31:22-10.0.0.1:45720.service - OpenSSH per-connection server daemon (10.0.0.1:45720).
Mar 7 01:23:57.735857 sshd[4445]: Accepted publickey for core from 10.0.0.1 port 45720 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:23:57.751175 sshd[4445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:23:57.812691 systemd-logind[1462]: New session 14 of user core.
Mar 7 01:23:57.824605 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 7 01:23:58.510770 sshd[4445]: pam_unix(sshd:session): session closed for user core
Mar 7 01:23:58.546401 systemd[1]: sshd@13-10.0.0.31:22-10.0.0.1:45720.service: Deactivated successfully.
Mar 7 01:23:58.571103 systemd[1]: session-14.scope: Deactivated successfully.
Mar 7 01:23:58.604382 systemd-logind[1462]: Session 14 logged out. Waiting for processes to exit.
Mar 7 01:23:58.621759 systemd-logind[1462]: Removed session 14.
Mar 7 01:24:02.795801 kubelet[2740]: E0307 01:24:02.795649 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:24:03.601756 systemd[1]: Started sshd@14-10.0.0.31:22-10.0.0.1:55174.service - OpenSSH per-connection server daemon (10.0.0.1:55174).
Mar 7 01:24:03.818392 sshd[4462]: Accepted publickey for core from 10.0.0.1 port 55174 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:24:03.831480 sshd[4462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:24:03.883104 systemd-logind[1462]: New session 15 of user core.
Mar 7 01:24:03.903545 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 7 01:24:04.603451 sshd[4462]: pam_unix(sshd:session): session closed for user core
Mar 7 01:24:04.879627 systemd[1]: sshd@14-10.0.0.31:22-10.0.0.1:55174.service: Deactivated successfully.
Mar 7 01:24:05.223785 systemd[1]: session-15.scope: Deactivated successfully.
Mar 7 01:24:05.235531 systemd-logind[1462]: Session 15 logged out. Waiting for processes to exit.
Mar 7 01:24:05.245284 systemd-logind[1462]: Removed session 15.
Mar 7 01:24:09.695038 systemd[1]: Started sshd@15-10.0.0.31:22-10.0.0.1:55188.service - OpenSSH per-connection server daemon (10.0.0.1:55188).
Mar 7 01:24:09.928794 sshd[4478]: Accepted publickey for core from 10.0.0.1 port 55188 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:24:09.934331 sshd[4478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:24:09.992883 systemd-logind[1462]: New session 16 of user core.
Mar 7 01:24:10.009023 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 7 01:24:10.940676 sshd[4478]: pam_unix(sshd:session): session closed for user core
Mar 7 01:24:10.954775 systemd[1]: sshd@15-10.0.0.31:22-10.0.0.1:55188.service: Deactivated successfully.
Mar 7 01:24:10.966283 systemd[1]: session-16.scope: Deactivated successfully.
Mar 7 01:24:11.003669 systemd-logind[1462]: Session 16 logged out. Waiting for processes to exit.
Mar 7 01:24:11.010256 systemd-logind[1462]: Removed session 16.
Mar 7 01:24:15.798656 kubelet[2740]: E0307 01:24:15.796519 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:24:16.036808 systemd[1]: Started sshd@16-10.0.0.31:22-10.0.0.1:49170.service - OpenSSH per-connection server daemon (10.0.0.1:49170).
Mar 7 01:24:16.260337 sshd[4494]: Accepted publickey for core from 10.0.0.1 port 49170 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:24:16.319334 sshd[4494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:24:16.376787 systemd-logind[1462]: New session 17 of user core.
Mar 7 01:24:16.427454 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 7 01:24:17.225636 sshd[4494]: pam_unix(sshd:session): session closed for user core
Mar 7 01:24:17.246624 systemd[1]: sshd@16-10.0.0.31:22-10.0.0.1:49170.service: Deactivated successfully.
Mar 7 01:24:17.262486 systemd[1]: session-17.scope: Deactivated successfully.
Mar 7 01:24:17.273236 systemd-logind[1462]: Session 17 logged out. Waiting for processes to exit.
Mar 7 01:24:17.303078 systemd-logind[1462]: Removed session 17.
Mar 7 01:24:22.369216 systemd[1]: Started sshd@17-10.0.0.31:22-10.0.0.1:33682.service - OpenSSH per-connection server daemon (10.0.0.1:33682).
Mar 7 01:24:22.458345 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories...
Mar 7 01:24:22.544770 sshd[4509]: Accepted publickey for core from 10.0.0.1 port 33682 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:24:22.548846 sshd[4509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:24:22.570564 systemd-logind[1462]: New session 18 of user core.
Mar 7 01:24:22.589765 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 7 01:24:22.615279 systemd-tmpfiles[4510]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 7 01:24:22.621680 systemd-tmpfiles[4510]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 7 01:24:22.630171 systemd-tmpfiles[4510]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 7 01:24:22.630736 systemd-tmpfiles[4510]: ACLs are not supported, ignoring.
Mar 7 01:24:22.630841 systemd-tmpfiles[4510]: ACLs are not supported, ignoring.
Mar 7 01:24:22.643162 systemd-tmpfiles[4510]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 01:24:22.643177 systemd-tmpfiles[4510]: Skipping /boot
Mar 7 01:24:22.689653 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Mar 7 01:24:22.698419 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories.
Mar 7 01:24:23.360750 sshd[4509]: pam_unix(sshd:session): session closed for user core
Mar 7 01:24:23.382719 systemd[1]: sshd@17-10.0.0.31:22-10.0.0.1:33682.service: Deactivated successfully.
Mar 7 01:24:23.418096 systemd[1]: session-18.scope: Deactivated successfully.
Mar 7 01:24:23.420773 systemd-logind[1462]: Session 18 logged out. Waiting for processes to exit.
Mar 7 01:24:23.427262 systemd-logind[1462]: Removed session 18.
Mar 7 01:24:26.794188 kubelet[2740]: E0307 01:24:26.776341 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:24:28.438990 systemd[1]: Started sshd@18-10.0.0.31:22-10.0.0.1:33694.service - OpenSSH per-connection server daemon (10.0.0.1:33694).
Mar 7 01:24:28.715310 sshd[4528]: Accepted publickey for core from 10.0.0.1 port 33694 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:24:28.717656 sshd[4528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:24:28.787715 systemd-logind[1462]: New session 19 of user core.
Mar 7 01:24:28.817691 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 7 01:24:29.531381 sshd[4528]: pam_unix(sshd:session): session closed for user core
Mar 7 01:24:29.566788 systemd[1]: sshd@18-10.0.0.31:22-10.0.0.1:33694.service: Deactivated successfully.
Mar 7 01:24:29.585519 systemd[1]: session-19.scope: Deactivated successfully.
Mar 7 01:24:29.599080 systemd-logind[1462]: Session 19 logged out. Waiting for processes to exit.
Mar 7 01:24:29.602820 systemd-logind[1462]: Removed session 19.
Mar 7 01:24:34.558537 systemd[1]: Started sshd@19-10.0.0.31:22-10.0.0.1:34040.service - OpenSSH per-connection server daemon (10.0.0.1:34040).
Mar 7 01:24:34.725613 sshd[4544]: Accepted publickey for core from 10.0.0.1 port 34040 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:24:34.745746 sshd[4544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:24:34.795184 systemd-logind[1462]: New session 20 of user core.
Mar 7 01:24:34.807401 kubelet[2740]: E0307 01:24:34.805225 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:24:34.842579 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 7 01:24:35.342085 sshd[4544]: pam_unix(sshd:session): session closed for user core
Mar 7 01:24:35.373590 systemd[1]: sshd@19-10.0.0.31:22-10.0.0.1:34040.service: Deactivated successfully.
Mar 7 01:24:35.382628 systemd[1]: session-20.scope: Deactivated successfully.
Mar 7 01:24:35.390888 systemd-logind[1462]: Session 20 logged out. Waiting for processes to exit.
Mar 7 01:24:35.403838 systemd-logind[1462]: Removed session 20.
Mar 7 01:24:40.416402 systemd[1]: Started sshd@20-10.0.0.31:22-10.0.0.1:58708.service - OpenSSH per-connection server daemon (10.0.0.1:58708).
Mar 7 01:24:40.610385 sshd[4561]: Accepted publickey for core from 10.0.0.1 port 58708 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:24:40.615624 sshd[4561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:24:40.647461 systemd-logind[1462]: New session 21 of user core.
Mar 7 01:24:40.659111 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 7 01:24:41.320230 sshd[4561]: pam_unix(sshd:session): session closed for user core
Mar 7 01:24:41.392459 systemd[1]: sshd@20-10.0.0.31:22-10.0.0.1:58708.service: Deactivated successfully.
Mar 7 01:24:41.456631 systemd[1]: session-21.scope: Deactivated successfully.
Mar 7 01:24:41.513883 systemd-logind[1462]: Session 21 logged out. Waiting for processes to exit.
Mar 7 01:24:41.527516 systemd-logind[1462]: Removed session 21.
Mar 7 01:24:41.802745 kubelet[2740]: E0307 01:24:41.798646 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:24:46.380822 systemd[1]: Started sshd@21-10.0.0.31:22-10.0.0.1:58720.service - OpenSSH per-connection server daemon (10.0.0.1:58720).
Mar 7 01:24:46.551788 sshd[4577]: Accepted publickey for core from 10.0.0.1 port 58720 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:24:46.560138 sshd[4577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:24:46.608987 systemd-logind[1462]: New session 22 of user core.
Mar 7 01:24:46.638172 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 7 01:24:47.704838 sshd[4577]: pam_unix(sshd:session): session closed for user core
Mar 7 01:24:47.718825 systemd[1]: sshd@21-10.0.0.31:22-10.0.0.1:58720.service: Deactivated successfully.
Mar 7 01:24:47.722174 systemd-logind[1462]: Session 22 logged out. Waiting for processes to exit.
Mar 7 01:24:47.726451 systemd[1]: session-22.scope: Deactivated successfully.
Mar 7 01:24:47.732237 systemd-logind[1462]: Removed session 22.
Mar 7 01:24:48.795853 kubelet[2740]: E0307 01:24:48.790530 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:24:52.759057 systemd[1]: Started sshd@22-10.0.0.31:22-10.0.0.1:40604.service - OpenSSH per-connection server daemon (10.0.0.1:40604).
Mar 7 01:24:53.012854 sshd[4592]: Accepted publickey for core from 10.0.0.1 port 40604 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:24:53.018157 sshd[4592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:24:53.106836 systemd-logind[1462]: New session 23 of user core.
Mar 7 01:24:53.131872 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 7 01:24:53.774459 sshd[4592]: pam_unix(sshd:session): session closed for user core
Mar 7 01:24:53.807741 systemd[1]: sshd@22-10.0.0.31:22-10.0.0.1:40604.service: Deactivated successfully.
Mar 7 01:24:53.824616 systemd[1]: session-23.scope: Deactivated successfully.
Mar 7 01:24:53.848485 systemd-logind[1462]: Session 23 logged out. Waiting for processes to exit.
Mar 7 01:24:53.862661 systemd-logind[1462]: Removed session 23.
Mar 7 01:24:56.777689 kubelet[2740]: E0307 01:24:56.775388 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:24:58.933742 systemd[1]: Started sshd@23-10.0.0.31:22-10.0.0.1:40630.service - OpenSSH per-connection server daemon (10.0.0.1:40630).
Mar 7 01:24:59.183756 sshd[4609]: Accepted publickey for core from 10.0.0.1 port 40630 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:24:59.214374 sshd[4609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:24:59.261080 systemd-logind[1462]: New session 24 of user core.
Mar 7 01:24:59.299132 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 7 01:25:00.059883 sshd[4609]: pam_unix(sshd:session): session closed for user core
Mar 7 01:25:00.097772 systemd[1]: sshd@23-10.0.0.31:22-10.0.0.1:40630.service: Deactivated successfully.
Mar 7 01:25:00.123406 systemd[1]: session-24.scope: Deactivated successfully.
Mar 7 01:25:00.141845 systemd-logind[1462]: Session 24 logged out. Waiting for processes to exit.
Mar 7 01:25:00.160156 systemd-logind[1462]: Removed session 24.
Mar 7 01:25:05.212721 systemd[1]: Started sshd@24-10.0.0.31:22-10.0.0.1:48016.service - OpenSSH per-connection server daemon (10.0.0.1:48016).
Mar 7 01:25:05.298400 sshd[4627]: Accepted publickey for core from 10.0.0.1 port 48016 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:25:05.296767 sshd[4627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:25:05.331645 systemd-logind[1462]: New session 25 of user core.
Mar 7 01:25:05.351785 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 7 01:25:05.780140 kubelet[2740]: E0307 01:25:05.778899 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:25:05.884758 sshd[4627]: pam_unix(sshd:session): session closed for user core
Mar 7 01:25:05.907821 systemd[1]: sshd@24-10.0.0.31:22-10.0.0.1:48016.service: Deactivated successfully.
Mar 7 01:25:05.908718 systemd-logind[1462]: Session 25 logged out. Waiting for processes to exit.
Mar 7 01:25:05.917320 systemd[1]: session-25.scope: Deactivated successfully.
Mar 7 01:25:05.930676 systemd-logind[1462]: Removed session 25.
Mar 7 01:25:10.951852 systemd[1]: Started sshd@25-10.0.0.31:22-10.0.0.1:55732.service - OpenSSH per-connection server daemon (10.0.0.1:55732).
Mar 7 01:25:11.268420 sshd[4643]: Accepted publickey for core from 10.0.0.1 port 55732 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:25:11.287570 sshd[4643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:25:11.346383 systemd-logind[1462]: New session 26 of user core.
Mar 7 01:25:11.422595 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 7 01:25:12.139881 sshd[4643]: pam_unix(sshd:session): session closed for user core
Mar 7 01:25:12.164449 systemd[1]: sshd@25-10.0.0.31:22-10.0.0.1:55732.service: Deactivated successfully.
Mar 7 01:25:12.170503 systemd[1]: session-26.scope: Deactivated successfully.
Mar 7 01:25:12.198707 systemd-logind[1462]: Session 26 logged out. Waiting for processes to exit.
Mar 7 01:25:12.217712 systemd-logind[1462]: Removed session 26.
Mar 7 01:25:17.194568 systemd[1]: Started sshd@26-10.0.0.31:22-10.0.0.1:55750.service - OpenSSH per-connection server daemon (10.0.0.1:55750).
Mar 7 01:25:17.321366 sshd[4659]: Accepted publickey for core from 10.0.0.1 port 55750 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:25:17.327490 sshd[4659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:25:17.366736 systemd-logind[1462]: New session 27 of user core.
Mar 7 01:25:17.393321 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 7 01:25:17.800044 kubelet[2740]: E0307 01:25:17.794916 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:25:17.816185 kubelet[2740]: E0307 01:25:17.803779 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:25:18.003349 sshd[4659]: pam_unix(sshd:session): session closed for user core
Mar 7 01:25:18.031854 systemd[1]: sshd@26-10.0.0.31:22-10.0.0.1:55750.service: Deactivated successfully.
Mar 7 01:25:18.050266 systemd[1]: session-27.scope: Deactivated successfully.
Mar 7 01:25:18.061304 systemd-logind[1462]: Session 27 logged out. Waiting for processes to exit.
Mar 7 01:25:18.063415 systemd-logind[1462]: Removed session 27.
Mar 7 01:25:23.095896 systemd[1]: Started sshd@27-10.0.0.31:22-10.0.0.1:53880.service - OpenSSH per-connection server daemon (10.0.0.1:53880).
Mar 7 01:25:23.290243 sshd[4674]: Accepted publickey for core from 10.0.0.1 port 53880 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:25:23.298545 sshd[4674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:25:23.347560 systemd-logind[1462]: New session 28 of user core.
Mar 7 01:25:23.395845 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 7 01:25:24.432088 sshd[4674]: pam_unix(sshd:session): session closed for user core
Mar 7 01:25:24.453450 systemd[1]: sshd@27-10.0.0.31:22-10.0.0.1:53880.service: Deactivated successfully.
Mar 7 01:25:24.469863 systemd[1]: session-28.scope: Deactivated successfully.
Mar 7 01:25:24.515515 systemd-logind[1462]: Session 28 logged out. Waiting for processes to exit.
Mar 7 01:25:24.533483 systemd-logind[1462]: Removed session 28.
Mar 7 01:25:29.505179 systemd[1]: Started sshd@28-10.0.0.31:22-10.0.0.1:53900.service - OpenSSH per-connection server daemon (10.0.0.1:53900).
Mar 7 01:25:29.716518 sshd[4690]: Accepted publickey for core from 10.0.0.1 port 53900 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:25:29.729620 sshd[4690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:25:29.768868 systemd-logind[1462]: New session 29 of user core.
Mar 7 01:25:29.792417 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 7 01:25:30.232254 sshd[4690]: pam_unix(sshd:session): session closed for user core
Mar 7 01:25:30.248815 systemd[1]: sshd@28-10.0.0.31:22-10.0.0.1:53900.service: Deactivated successfully.
Mar 7 01:25:30.262455 systemd[1]: session-29.scope: Deactivated successfully.
Mar 7 01:25:30.272064 systemd-logind[1462]: Session 29 logged out. Waiting for processes to exit.
Mar 7 01:25:30.277230 systemd-logind[1462]: Removed session 29.
Mar 7 01:25:35.349344 systemd[1]: Started sshd@29-10.0.0.31:22-10.0.0.1:55636.service - OpenSSH per-connection server daemon (10.0.0.1:55636).
Mar 7 01:25:35.556695 sshd[4705]: Accepted publickey for core from 10.0.0.1 port 55636 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:25:35.594268 sshd[4705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:25:35.639513 systemd-logind[1462]: New session 30 of user core.
Mar 7 01:25:35.659325 systemd[1]: Started session-30.scope - Session 30 of User core.
Mar 7 01:25:36.019146 sshd[4705]: pam_unix(sshd:session): session closed for user core
Mar 7 01:25:36.033105 systemd[1]: sshd@29-10.0.0.31:22-10.0.0.1:55636.service: Deactivated successfully.
Mar 7 01:25:36.053439 systemd[1]: session-30.scope: Deactivated successfully.
Mar 7 01:25:36.072770 systemd-logind[1462]: Session 30 logged out. Waiting for processes to exit.
Mar 7 01:25:36.089312 systemd-logind[1462]: Removed session 30.
Mar 7 01:25:36.779674 kubelet[2740]: E0307 01:25:36.779533 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:25:38.790108 kubelet[2740]: E0307 01:25:38.784671 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:25:41.099438 systemd[1]: Started sshd@30-10.0.0.31:22-10.0.0.1:50304.service - OpenSSH per-connection server daemon (10.0.0.1:50304).
Mar 7 01:25:41.350258 sshd[4722]: Accepted publickey for core from 10.0.0.1 port 50304 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:25:41.368149 sshd[4722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:25:41.414174 systemd-logind[1462]: New session 31 of user core.
Mar 7 01:25:41.492698 systemd[1]: Started session-31.scope - Session 31 of User core.
Mar 7 01:25:42.618482 sshd[4722]: pam_unix(sshd:session): session closed for user core
Mar 7 01:25:42.632749 systemd[1]: sshd@30-10.0.0.31:22-10.0.0.1:50304.service: Deactivated successfully.
Mar 7 01:25:42.651236 systemd[1]: session-31.scope: Deactivated successfully.
Mar 7 01:25:42.668983 systemd-logind[1462]: Session 31 logged out. Waiting for processes to exit.
Mar 7 01:25:42.692324 systemd-logind[1462]: Removed session 31.
Mar 7 01:25:47.659509 systemd[1]: Started sshd@31-10.0.0.31:22-10.0.0.1:50320.service - OpenSSH per-connection server daemon (10.0.0.1:50320).
Mar 7 01:25:47.787731 sshd[4741]: Accepted publickey for core from 10.0.0.1 port 50320 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:25:47.791902 sshd[4741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:25:47.818045 systemd-logind[1462]: New session 32 of user core.
Mar 7 01:25:47.844894 systemd[1]: Started session-32.scope - Session 32 of User core.
Mar 7 01:25:48.179573 sshd[4741]: pam_unix(sshd:session): session closed for user core
Mar 7 01:25:48.197173 systemd[1]: sshd@31-10.0.0.31:22-10.0.0.1:50320.service: Deactivated successfully.
Mar 7 01:25:48.208664 systemd[1]: session-32.scope: Deactivated successfully.
Mar 7 01:25:48.218505 systemd-logind[1462]: Session 32 logged out. Waiting for processes to exit.
Mar 7 01:25:48.224576 systemd-logind[1462]: Removed session 32.
Mar 7 01:25:53.230536 systemd[1]: Started sshd@32-10.0.0.31:22-10.0.0.1:35718.service - OpenSSH per-connection server daemon (10.0.0.1:35718).
Mar 7 01:25:53.361405 sshd[4757]: Accepted publickey for core from 10.0.0.1 port 35718 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:25:53.378161 sshd[4757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:25:53.403299 systemd-logind[1462]: New session 33 of user core.
Mar 7 01:25:53.419444 systemd[1]: Started session-33.scope - Session 33 of User core.
Mar 7 01:25:53.834828 sshd[4757]: pam_unix(sshd:session): session closed for user core
Mar 7 01:25:53.862217 systemd[1]: sshd@32-10.0.0.31:22-10.0.0.1:35718.service: Deactivated successfully.
Mar 7 01:25:53.870815 systemd[1]: session-33.scope: Deactivated successfully.
Mar 7 01:25:53.883600 systemd-logind[1462]: Session 33 logged out. Waiting for processes to exit.
Mar 7 01:25:53.906432 systemd[1]: Started sshd@33-10.0.0.31:22-10.0.0.1:35722.service - OpenSSH per-connection server daemon (10.0.0.1:35722).
Mar 7 01:25:53.908445 systemd-logind[1462]: Removed session 33.
Mar 7 01:25:53.977539 sshd[4772]: Accepted publickey for core from 10.0.0.1 port 35722 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:25:53.977263 sshd[4772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:25:54.004466 systemd-logind[1462]: New session 34 of user core.
Mar 7 01:25:54.014426 systemd[1]: Started session-34.scope - Session 34 of User core.
Mar 7 01:25:54.461532 sshd[4772]: pam_unix(sshd:session): session closed for user core
Mar 7 01:25:54.488649 systemd[1]: sshd@33-10.0.0.31:22-10.0.0.1:35722.service: Deactivated successfully.
Mar 7 01:25:54.499187 systemd[1]: session-34.scope: Deactivated successfully.
Mar 7 01:25:54.505396 systemd-logind[1462]: Session 34 logged out. Waiting for processes to exit.
Mar 7 01:25:54.523527 systemd[1]: Started sshd@34-10.0.0.31:22-10.0.0.1:35734.service - OpenSSH per-connection server daemon (10.0.0.1:35734).
Mar 7 01:25:54.529071 systemd-logind[1462]: Removed session 34.
Mar 7 01:25:54.681792 sshd[4784]: Accepted publickey for core from 10.0.0.1 port 35734 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:25:54.690388 sshd[4784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:25:54.712064 systemd-logind[1462]: New session 35 of user core.
Mar 7 01:25:54.727665 systemd[1]: Started session-35.scope - Session 35 of User core.
Mar 7 01:25:55.041351 sshd[4784]: pam_unix(sshd:session): session closed for user core
Mar 7 01:25:55.054839 systemd[1]: sshd@34-10.0.0.31:22-10.0.0.1:35734.service: Deactivated successfully.
Mar 7 01:25:55.059197 systemd[1]: session-35.scope: Deactivated successfully.
Mar 7 01:25:55.065908 systemd-logind[1462]: Session 35 logged out. Waiting for processes to exit.
Mar 7 01:25:55.085236 systemd-logind[1462]: Removed session 35.
Mar 7 01:25:57.778889 kubelet[2740]: E0307 01:25:57.778093 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:26:00.135469 systemd[1]: Started sshd@35-10.0.0.31:22-10.0.0.1:48414.service - OpenSSH per-connection server daemon (10.0.0.1:48414).
Mar 7 01:26:00.371094 sshd[4798]: Accepted publickey for core from 10.0.0.1 port 48414 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:26:00.394683 sshd[4798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:26:00.451282 systemd-logind[1462]: New session 36 of user core.
Mar 7 01:26:00.472535 systemd[1]: Started session-36.scope - Session 36 of User core.
Mar 7 01:26:01.345515 sshd[4798]: pam_unix(sshd:session): session closed for user core
Mar 7 01:26:01.376585 systemd[1]: sshd@35-10.0.0.31:22-10.0.0.1:48414.service: Deactivated successfully.
Mar 7 01:26:01.391155 systemd[1]: session-36.scope: Deactivated successfully.
Mar 7 01:26:01.413196 systemd-logind[1462]: Session 36 logged out. Waiting for processes to exit.
Mar 7 01:26:01.456419 systemd-logind[1462]: Removed session 36.
Mar 7 01:26:07.023583 systemd[1]: Started sshd@36-10.0.0.31:22-10.0.0.1:48440.service - OpenSSH per-connection server daemon (10.0.0.1:48440).
Mar 7 01:26:07.245877 sshd[4817]: Accepted publickey for core from 10.0.0.1 port 48440 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:26:07.253261 sshd[4817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:26:07.314239 systemd-logind[1462]: New session 37 of user core.
Mar 7 01:26:07.468014 systemd[1]: Started session-37.scope - Session 37 of User core.
Mar 7 01:26:08.233271 sshd[4817]: pam_unix(sshd:session): session closed for user core
Mar 7 01:26:08.260219 systemd[1]: sshd@36-10.0.0.31:22-10.0.0.1:48440.service: Deactivated successfully.
Mar 7 01:26:08.270340 systemd[1]: session-37.scope: Deactivated successfully.
Mar 7 01:26:08.303548 systemd-logind[1462]: Session 37 logged out. Waiting for processes to exit.
Mar 7 01:26:08.314403 systemd-logind[1462]: Removed session 37.
Mar 7 01:26:11.833110 kubelet[2740]: E0307 01:26:11.831686 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:26:12.928115 kubelet[2740]: E0307 01:26:12.927351 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:26:13.665145 systemd[1]: Started sshd@37-10.0.0.31:22-10.0.0.1:60280.service - OpenSSH per-connection server daemon (10.0.0.1:60280).
Mar 7 01:26:14.013301 sshd[4832]: Accepted publickey for core from 10.0.0.1 port 60280 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:26:14.017013 sshd[4832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:26:14.075381 systemd-logind[1462]: New session 38 of user core.
Mar 7 01:26:14.096849 systemd[1]: Started session-38.scope - Session 38 of User core.
Mar 7 01:26:14.944791 sshd[4832]: pam_unix(sshd:session): session closed for user core
Mar 7 01:26:14.972014 systemd[1]: sshd@37-10.0.0.31:22-10.0.0.1:60280.service: Deactivated successfully.
Mar 7 01:26:15.010778 systemd[1]: session-38.scope: Deactivated successfully.
Mar 7 01:26:15.026723 systemd-logind[1462]: Session 38 logged out. Waiting for processes to exit.
Mar 7 01:26:15.043568 systemd-logind[1462]: Removed session 38.
Mar 7 01:26:19.966883 systemd[1]: Started sshd@38-10.0.0.31:22-10.0.0.1:60290.service - OpenSSH per-connection server daemon (10.0.0.1:60290).
Mar 7 01:26:20.278848 sshd[4847]: Accepted publickey for core from 10.0.0.1 port 60290 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:26:20.316497 sshd[4847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:26:20.343000 systemd-logind[1462]: New session 39 of user core.
Mar 7 01:26:20.384889 systemd[1]: Started session-39.scope - Session 39 of User core.
Mar 7 01:26:21.204379 sshd[4847]: pam_unix(sshd:session): session closed for user core
Mar 7 01:26:21.230510 systemd[1]: sshd@38-10.0.0.31:22-10.0.0.1:60290.service: Deactivated successfully.
Mar 7 01:26:21.246730 systemd[1]: session-39.scope: Deactivated successfully.
Mar 7 01:26:21.279699 systemd-logind[1462]: Session 39 logged out. Waiting for processes to exit.
Mar 7 01:26:21.303885 systemd-logind[1462]: Removed session 39.
Mar 7 01:26:21.810191 kubelet[2740]: E0307 01:26:21.807246 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:26:39.354207 systemd[1]: Started sshd@39-10.0.0.31:22-10.0.0.1:53708.service - OpenSSH per-connection server daemon (10.0.0.1:53708).
Mar 7 01:26:39.519452 systemd[1]: cri-containerd-a860a19954e78171e07733e52b8bbc141578f40332481fed03e5012a1f45328f.scope: Deactivated successfully.
Mar 7 01:26:39.520107 systemd[1]: cri-containerd-a860a19954e78171e07733e52b8bbc141578f40332481fed03e5012a1f45328f.scope: Consumed 59.327s CPU time, 20.0M memory peak, 0B memory swap peak.
Mar 7 01:26:39.923394 kubelet[2740]: E0307 01:26:39.916469 2740 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.566s"
Mar 7 01:26:40.018377 kubelet[2740]: E0307 01:26:40.018330 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:26:40.222273 sshd[4863]: Accepted publickey for core from 10.0.0.1 port 53708 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:26:40.232541 sshd[4863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:26:40.349494 systemd-logind[1462]: New session 40 of user core.
Mar 7 01:26:40.360868 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a860a19954e78171e07733e52b8bbc141578f40332481fed03e5012a1f45328f-rootfs.mount: Deactivated successfully.
Mar 7 01:26:40.419468 systemd[1]: Started session-40.scope - Session 40 of User core.
Mar 7 01:26:40.513802 containerd[1478]: time="2026-03-07T01:26:40.512999609Z" level=info msg="shim disconnected" id=a860a19954e78171e07733e52b8bbc141578f40332481fed03e5012a1f45328f namespace=k8s.io
Mar 7 01:26:40.513802 containerd[1478]: time="2026-03-07T01:26:40.513442919Z" level=warning msg="cleaning up after shim disconnected" id=a860a19954e78171e07733e52b8bbc141578f40332481fed03e5012a1f45328f namespace=k8s.io
Mar 7 01:26:40.513802 containerd[1478]: time="2026-03-07T01:26:40.513463888Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:26:40.947702 kubelet[2740]: I0307 01:26:40.942869 2740 scope.go:117] "RemoveContainer" containerID="a860a19954e78171e07733e52b8bbc141578f40332481fed03e5012a1f45328f"
Mar 7 01:26:40.947702 kubelet[2740]: E0307 01:26:40.943069 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:26:41.004030 containerd[1478]: time="2026-03-07T01:26:41.003751997Z" level=info msg="CreateContainer within sandbox \"7f433e06b96c260756587b7e2ef17cb73d45c809749e8496bed201e14ec3d04e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Mar 7 01:26:41.282657 containerd[1478]: time="2026-03-07T01:26:41.280488471Z" level=info msg="CreateContainer within sandbox \"7f433e06b96c260756587b7e2ef17cb73d45c809749e8496bed201e14ec3d04e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"e016d1090f9ff9771920ebf177935cb8194dcc6bfd750a03984883b667d75f14\""
Mar 7 01:26:41.322498 containerd[1478]: time="2026-03-07T01:26:41.286419723Z" level=info msg="StartContainer for \"e016d1090f9ff9771920ebf177935cb8194dcc6bfd750a03984883b667d75f14\""
Mar 7 01:26:41.942673 systemd[1]: run-containerd-runc-k8s.io-e016d1090f9ff9771920ebf177935cb8194dcc6bfd750a03984883b667d75f14-runc.jBTPun.mount: Deactivated successfully.
Mar 7 01:26:42.045412 sshd[4863]: pam_unix(sshd:session): session closed for user core
Mar 7 01:26:42.062218 systemd[1]: Started cri-containerd-e016d1090f9ff9771920ebf177935cb8194dcc6bfd750a03984883b667d75f14.scope - libcontainer container e016d1090f9ff9771920ebf177935cb8194dcc6bfd750a03984883b667d75f14.
Mar 7 01:26:42.078907 systemd[1]: sshd@39-10.0.0.31:22-10.0.0.1:53708.service: Deactivated successfully.
Mar 7 01:26:42.112405 systemd[1]: session-40.scope: Deactivated successfully.
Mar 7 01:26:42.126343 systemd-logind[1462]: Session 40 logged out. Waiting for processes to exit.
Mar 7 01:26:42.137677 systemd-logind[1462]: Removed session 40.
Mar 7 01:26:42.752997 containerd[1478]: time="2026-03-07T01:26:42.752848602Z" level=info msg="StartContainer for \"e016d1090f9ff9771920ebf177935cb8194dcc6bfd750a03984883b667d75f14\" returns successfully"
Mar 7 01:26:43.039259 kubelet[2740]: E0307 01:26:43.038726 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:26:43.468895 systemd[1]: cri-containerd-062afa22810b441b6e2cb124d875dad68e794f6285a955f9a7690618719bd4f4.scope: Deactivated successfully.
Mar 7 01:26:43.506907 systemd[1]: cri-containerd-062afa22810b441b6e2cb124d875dad68e794f6285a955f9a7690618719bd4f4.scope: Consumed 32.590s CPU time, 22.0M memory peak, 0B memory swap peak.
Mar 7 01:26:43.969401 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-062afa22810b441b6e2cb124d875dad68e794f6285a955f9a7690618719bd4f4-rootfs.mount: Deactivated successfully.
Mar 7 01:26:44.103358 containerd[1478]: time="2026-03-07T01:26:44.099273529Z" level=info msg="shim disconnected" id=062afa22810b441b6e2cb124d875dad68e794f6285a955f9a7690618719bd4f4 namespace=k8s.io
Mar 7 01:26:44.103358 containerd[1478]: time="2026-03-07T01:26:44.099354329Z" level=warning msg="cleaning up after shim disconnected" id=062afa22810b441b6e2cb124d875dad68e794f6285a955f9a7690618719bd4f4 namespace=k8s.io
Mar 7 01:26:44.103358 containerd[1478]: time="2026-03-07T01:26:44.099369698Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:26:45.022377 kubelet[2740]: E0307 01:26:45.010394 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:26:45.101000 kubelet[2740]: I0307 01:26:45.099854 2740 scope.go:117] "RemoveContainer" containerID="062afa22810b441b6e2cb124d875dad68e794f6285a955f9a7690618719bd4f4"
Mar 7 01:26:45.101000 kubelet[2740]: E0307 01:26:45.100132 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:26:45.164196 containerd[1478]: time="2026-03-07T01:26:45.137410382Z" level=info msg="CreateContainer within sandbox \"4ec0a69da46ebf1633da29001e545386c7011ca6344612162343e44aeedf3bff\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Mar 7 01:26:45.415105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2873349164.mount: Deactivated successfully.
Mar 7 01:26:45.452651 containerd[1478]: time="2026-03-07T01:26:45.452529519Z" level=info msg="CreateContainer within sandbox \"4ec0a69da46ebf1633da29001e545386c7011ca6344612162343e44aeedf3bff\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"adf17800c74ab2a5fca6a675f336ae1f8b505baa3609b5bcafb652cd058bcdfc\""
Mar 7 01:26:45.461010 containerd[1478]: time="2026-03-07T01:26:45.455142649Z" level=info msg="StartContainer for \"adf17800c74ab2a5fca6a675f336ae1f8b505baa3609b5bcafb652cd058bcdfc\""
Mar 7 01:26:45.983632 systemd[1]: Started cri-containerd-adf17800c74ab2a5fca6a675f336ae1f8b505baa3609b5bcafb652cd058bcdfc.scope - libcontainer container adf17800c74ab2a5fca6a675f336ae1f8b505baa3609b5bcafb652cd058bcdfc.
Mar 7 01:26:46.570655 containerd[1478]: time="2026-03-07T01:26:46.560104805Z" level=info msg="StartContainer for \"adf17800c74ab2a5fca6a675f336ae1f8b505baa3609b5bcafb652cd058bcdfc\" returns successfully"
Mar 7 01:26:46.775427 kubelet[2740]: E0307 01:26:46.775323 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:26:46.799704 kubelet[2740]: E0307 01:26:46.777180 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:26:47.263201 systemd[1]: Started sshd@40-10.0.0.31:22-10.0.0.1:37580.service - OpenSSH per-connection server daemon (10.0.0.1:37580).
Mar 7 01:26:47.317230 kubelet[2740]: E0307 01:26:47.316047 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:26:48.351214 kubelet[2740]: E0307 01:26:48.351175 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:26:48.406028 sshd[5007]: Accepted publickey for core from 10.0.0.1 port 37580 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:26:48.410541 sshd[5007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:26:48.487373 systemd-logind[1462]: New session 41 of user core.
Mar 7 01:26:48.536353 systemd[1]: Started session-41.scope - Session 41 of User core.
Mar 7 01:26:49.333428 kubelet[2740]: E0307 01:26:49.333344 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:26:49.613857 sshd[5007]: pam_unix(sshd:session): session closed for user core
Mar 7 01:26:49.622297 systemd[1]: sshd@40-10.0.0.31:22-10.0.0.1:37580.service: Deactivated successfully.
Mar 7 01:26:49.627494 systemd[1]: session-41.scope: Deactivated successfully.
Mar 7 01:26:49.631499 systemd-logind[1462]: Session 41 logged out. Waiting for processes to exit.
Mar 7 01:26:49.634575 systemd-logind[1462]: Removed session 41.
Mar 7 01:26:54.696116 systemd[1]: Started sshd@41-10.0.0.31:22-10.0.0.1:39406.service - OpenSSH per-connection server daemon (10.0.0.1:39406).
Mar 7 01:26:54.892737 sshd[5027]: Accepted publickey for core from 10.0.0.1 port 39406 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:26:54.902203 sshd[5027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:26:54.939583 systemd-logind[1462]: New session 42 of user core.
Mar 7 01:26:54.956558 systemd[1]: Started session-42.scope - Session 42 of User core.
Mar 7 01:26:55.079851 kubelet[2740]: E0307 01:26:55.062746 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:26:55.647694 sshd[5027]: pam_unix(sshd:session): session closed for user core
Mar 7 01:26:55.655696 systemd[1]: sshd@41-10.0.0.31:22-10.0.0.1:39406.service: Deactivated successfully.
Mar 7 01:26:55.659075 systemd[1]: session-42.scope: Deactivated successfully.
Mar 7 01:26:55.662323 systemd-logind[1462]: Session 42 logged out. Waiting for processes to exit.
Mar 7 01:26:55.668809 systemd-logind[1462]: Removed session 42.
Mar 7 01:26:57.075454 kubelet[2740]: E0307 01:26:57.064365 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:26:57.460424 kubelet[2740]: E0307 01:26:57.453825 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:26:58.469339 kubelet[2740]: E0307 01:26:58.468453 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:26:58.799083 kubelet[2740]: E0307 01:26:58.786453 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:27:00.761166 systemd[1]: Started sshd@42-10.0.0.31:22-10.0.0.1:58192.service - OpenSSH per-connection server daemon (10.0.0.1:58192).
Mar 7 01:27:00.979716 sshd[5041]: Accepted publickey for core from 10.0.0.1 port 58192 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:27:00.988716 sshd[5041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:27:01.032108 systemd-logind[1462]: New session 43 of user core.
Mar 7 01:27:01.047038 systemd[1]: Started session-43.scope - Session 43 of User core.
Mar 7 01:27:01.758778 sshd[5041]: pam_unix(sshd:session): session closed for user core
Mar 7 01:27:01.851453 systemd[1]: sshd@42-10.0.0.31:22-10.0.0.1:58192.service: Deactivated successfully.
Mar 7 01:27:01.868840 systemd[1]: session-43.scope: Deactivated successfully.
Mar 7 01:27:01.898732 systemd-logind[1462]: Session 43 logged out. Waiting for processes to exit.
Mar 7 01:27:01.914410 systemd-logind[1462]: Removed session 43.
Mar 7 01:27:07.200694 systemd[1]: Started sshd@43-10.0.0.31:22-10.0.0.1:58198.service - OpenSSH per-connection server daemon (10.0.0.1:58198).
Mar 7 01:27:07.667906 sshd[5057]: Accepted publickey for core from 10.0.0.1 port 58198 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:27:07.712068 sshd[5057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:27:07.819040 systemd-logind[1462]: New session 44 of user core.
Mar 7 01:27:07.842621 systemd[1]: Started session-44.scope - Session 44 of User core.
Mar 7 01:27:08.822498 sshd[5057]: pam_unix(sshd:session): session closed for user core
Mar 7 01:27:08.862826 systemd[1]: sshd@43-10.0.0.31:22-10.0.0.1:58198.service: Deactivated successfully.
Mar 7 01:27:08.882587 systemd[1]: session-44.scope: Deactivated successfully.
Mar 7 01:27:08.902888 systemd-logind[1462]: Session 44 logged out. Waiting for processes to exit.
Mar 7 01:27:08.918381 systemd-logind[1462]: Removed session 44.
Mar 7 01:27:13.955344 systemd[1]: Started sshd@44-10.0.0.31:22-10.0.0.1:55502.service - OpenSSH per-connection server daemon (10.0.0.1:55502).
Mar 7 01:27:14.393106 sshd[5072]: Accepted publickey for core from 10.0.0.1 port 55502 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:27:14.398018 sshd[5072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:27:14.461100 systemd-logind[1462]: New session 45 of user core.
Mar 7 01:27:14.485293 systemd[1]: Started session-45.scope - Session 45 of User core.
Mar 7 01:27:15.619453 sshd[5072]: pam_unix(sshd:session): session closed for user core
Mar 7 01:27:15.652552 systemd[1]: sshd@44-10.0.0.31:22-10.0.0.1:55502.service: Deactivated successfully.
Mar 7 01:27:15.663646 systemd[1]: session-45.scope: Deactivated successfully.
Mar 7 01:27:15.681124 systemd-logind[1462]: Session 45 logged out. Waiting for processes to exit.
Mar 7 01:27:15.683475 systemd-logind[1462]: Removed session 45.
Mar 7 01:27:20.687456 systemd[1]: Started sshd@45-10.0.0.31:22-10.0.0.1:57914.service - OpenSSH per-connection server daemon (10.0.0.1:57914).
Mar 7 01:27:21.007051 sshd[5087]: Accepted publickey for core from 10.0.0.1 port 57914 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:27:20.991752 sshd[5087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:27:21.048781 systemd-logind[1462]: New session 46 of user core.
Mar 7 01:27:21.096795 systemd[1]: Started session-46.scope - Session 46 of User core.
Mar 7 01:27:21.918049 sshd[5087]: pam_unix(sshd:session): session closed for user core
Mar 7 01:27:21.946223 systemd[1]: sshd@45-10.0.0.31:22-10.0.0.1:57914.service: Deactivated successfully.
Mar 7 01:27:21.970475 systemd[1]: session-46.scope: Deactivated successfully.
Mar 7 01:27:21.984559 systemd-logind[1462]: Session 46 logged out. Waiting for processes to exit.
Mar 7 01:27:21.997169 systemd-logind[1462]: Removed session 46.
Mar 7 01:27:22.788568 kubelet[2740]: E0307 01:27:22.787247 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:27:26.967813 kubelet[2740]: E0307 01:27:26.954375 2740 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.177s"
Mar 7 01:27:27.094814 systemd[1]: Started sshd@46-10.0.0.31:22-10.0.0.1:57934.service - OpenSSH per-connection server daemon (10.0.0.1:57934).
Mar 7 01:27:27.372559 sshd[5102]: Accepted publickey for core from 10.0.0.1 port 57934 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:27:27.395508 sshd[5102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:27:27.440366 systemd-logind[1462]: New session 47 of user core.
Mar 7 01:27:27.460773 systemd[1]: Started session-47.scope - Session 47 of User core.
Mar 7 01:27:28.130492 sshd[5102]: pam_unix(sshd:session): session closed for user core
Mar 7 01:27:28.146900 systemd[1]: sshd@46-10.0.0.31:22-10.0.0.1:57934.service: Deactivated successfully.
Mar 7 01:27:28.162878 systemd[1]: session-47.scope: Deactivated successfully.
Mar 7 01:27:28.183094 systemd-logind[1462]: Session 47 logged out. Waiting for processes to exit.
Mar 7 01:27:28.192887 systemd-logind[1462]: Removed session 47.
Mar 7 01:27:31.778604 kubelet[2740]: E0307 01:27:31.777587 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:27:33.234025 systemd[1]: Started sshd@47-10.0.0.31:22-10.0.0.1:33990.service - OpenSSH per-connection server daemon (10.0.0.1:33990).
Mar 7 01:27:33.440400 sshd[5117]: Accepted publickey for core from 10.0.0.1 port 33990 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:27:33.448638 sshd[5117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:27:33.506228 systemd-logind[1462]: New session 48 of user core.
Mar 7 01:27:33.542001 systemd[1]: Started session-48.scope - Session 48 of User core.
Mar 7 01:27:34.495250 sshd[5117]: pam_unix(sshd:session): session closed for user core
Mar 7 01:27:34.531816 systemd[1]: sshd@47-10.0.0.31:22-10.0.0.1:33990.service: Deactivated successfully.
Mar 7 01:27:34.554287 systemd[1]: session-48.scope: Deactivated successfully.
Mar 7 01:27:34.565728 systemd-logind[1462]: Session 48 logged out. Waiting for processes to exit.
Mar 7 01:27:34.618048 systemd-logind[1462]: Removed session 48.
Mar 7 01:27:39.708800 systemd[1]: Started sshd@48-10.0.0.31:22-10.0.0.1:34010.service - OpenSSH per-connection server daemon (10.0.0.1:34010).
Mar 7 01:27:39.926680 sshd[5133]: Accepted publickey for core from 10.0.0.1 port 34010 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:27:39.965031 sshd[5133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:27:40.050746 systemd-logind[1462]: New session 49 of user core.
Mar 7 01:27:40.135649 systemd[1]: Started session-49.scope - Session 49 of User core.
Mar 7 01:27:40.771218 sshd[5133]: pam_unix(sshd:session): session closed for user core
Mar 7 01:27:40.799774 systemd[1]: sshd@48-10.0.0.31:22-10.0.0.1:34010.service: Deactivated successfully.
Mar 7 01:27:40.814218 systemd[1]: session-49.scope: Deactivated successfully.
Mar 7 01:27:40.840897 systemd-logind[1462]: Session 49 logged out. Waiting for processes to exit.
Mar 7 01:27:40.850664 systemd-logind[1462]: Removed session 49.
Mar 7 01:27:42.796633 kubelet[2740]: E0307 01:27:42.794169 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:27:45.995883 systemd[1]: Started sshd@49-10.0.0.31:22-10.0.0.1:57040.service - OpenSSH per-connection server daemon (10.0.0.1:57040).
Mar 7 01:27:46.466872 sshd[5149]: Accepted publickey for core from 10.0.0.1 port 57040 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:27:46.520767 sshd[5149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:27:46.709071 systemd-logind[1462]: New session 50 of user core.
Mar 7 01:27:46.795304 systemd[1]: Started session-50.scope - Session 50 of User core.
Mar 7 01:27:48.039031 sshd[5149]: pam_unix(sshd:session): session closed for user core
Mar 7 01:27:48.101430 systemd[1]: sshd@49-10.0.0.31:22-10.0.0.1:57040.service: Deactivated successfully.
Mar 7 01:27:48.127291 systemd[1]: session-50.scope: Deactivated successfully.
Mar 7 01:27:48.157232 systemd-logind[1462]: Session 50 logged out. Waiting for processes to exit.
Mar 7 01:27:48.178332 systemd-logind[1462]: Removed session 50.
Mar 7 01:27:53.194607 systemd[1]: Started sshd@50-10.0.0.31:22-10.0.0.1:36798.service - OpenSSH per-connection server daemon (10.0.0.1:36798).
Mar 7 01:27:53.613001 sshd[5164]: Accepted publickey for core from 10.0.0.1 port 36798 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:27:53.634053 sshd[5164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:27:53.712819 systemd-logind[1462]: New session 51 of user core.
Mar 7 01:27:53.751005 systemd[1]: Started session-51.scope - Session 51 of User core.
Mar 7 01:27:54.862393 sshd[5164]: pam_unix(sshd:session): session closed for user core
Mar 7 01:27:54.901336 systemd[1]: sshd@50-10.0.0.31:22-10.0.0.1:36798.service: Deactivated successfully.
Mar 7 01:27:54.927384 systemd[1]: session-51.scope: Deactivated successfully.
Mar 7 01:27:54.945877 systemd-logind[1462]: Session 51 logged out. Waiting for processes to exit.
Mar 7 01:27:55.009831 systemd-logind[1462]: Removed session 51.
Mar 7 01:28:00.326473 systemd[1]: Started sshd@51-10.0.0.31:22-10.0.0.1:36818.service - OpenSSH per-connection server daemon (10.0.0.1:36818).
Mar 7 01:28:00.988506 sshd[5179]: Accepted publickey for core from 10.0.0.1 port 36818 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:28:01.031618 sshd[5179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:28:01.101529 systemd-logind[1462]: New session 52 of user core.
Mar 7 01:28:01.134916 systemd[1]: Started session-52.scope - Session 52 of User core.
Mar 7 01:28:02.079478 sshd[5179]: pam_unix(sshd:session): session closed for user core
Mar 7 01:28:02.124527 systemd[1]: sshd@51-10.0.0.31:22-10.0.0.1:36818.service: Deactivated successfully.
Mar 7 01:28:02.139116 systemd[1]: session-52.scope: Deactivated successfully.
Mar 7 01:28:02.145615 systemd-logind[1462]: Session 52 logged out. Waiting for processes to exit.
Mar 7 01:28:02.201094 systemd-logind[1462]: Removed session 52.
Mar 7 01:28:03.789844 kubelet[2740]: E0307 01:28:03.784316 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:28:06.803455 kubelet[2740]: E0307 01:28:06.802438 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:28:07.207580 systemd[1]: Started sshd@52-10.0.0.31:22-10.0.0.1:33966.service - OpenSSH per-connection server daemon (10.0.0.1:33966).
Mar 7 01:28:07.732444 sshd[5195]: Accepted publickey for core from 10.0.0.1 port 33966 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:28:07.749469 sshd[5195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:28:07.800313 systemd-logind[1462]: New session 53 of user core.
Mar 7 01:28:07.807704 systemd[1]: Started session-53.scope - Session 53 of User core.
Mar 7 01:28:08.594365 sshd[5195]: pam_unix(sshd:session): session closed for user core
Mar 7 01:28:08.641175 systemd[1]: sshd@52-10.0.0.31:22-10.0.0.1:33966.service: Deactivated successfully.
Mar 7 01:28:08.650701 systemd[1]: session-53.scope: Deactivated successfully.
Mar 7 01:28:08.667575 systemd-logind[1462]: Session 53 logged out. Waiting for processes to exit.
Mar 7 01:28:08.712243 systemd-logind[1462]: Removed session 53.
Mar 7 01:28:13.713228 systemd[1]: Started sshd@53-10.0.0.31:22-10.0.0.1:44040.service - OpenSSH per-connection server daemon (10.0.0.1:44040).
Mar 7 01:28:13.906362 sshd[5211]: Accepted publickey for core from 10.0.0.1 port 44040 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:28:13.918410 sshd[5211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:28:13.948208 systemd-logind[1462]: New session 54 of user core.
Mar 7 01:28:14.016480 systemd[1]: Started session-54.scope - Session 54 of User core.
Mar 7 01:28:14.976549 sshd[5211]: pam_unix(sshd:session): session closed for user core
Mar 7 01:28:15.028676 systemd[1]: sshd@53-10.0.0.31:22-10.0.0.1:44040.service: Deactivated successfully.
Mar 7 01:28:15.051721 systemd[1]: session-54.scope: Deactivated successfully.
Mar 7 01:28:15.090730 systemd-logind[1462]: Session 54 logged out. Waiting for processes to exit.
Mar 7 01:28:15.114338 systemd-logind[1462]: Removed session 54.
Mar 7 01:28:16.794092 kubelet[2740]: E0307 01:28:16.784233 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:28:18.797689 kubelet[2740]: E0307 01:28:18.787093 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:28:20.074634 systemd[1]: Started sshd@54-10.0.0.31:22-10.0.0.1:44070.service - OpenSSH per-connection server daemon (10.0.0.1:44070).
Mar 7 01:28:20.347295 sshd[5227]: Accepted publickey for core from 10.0.0.1 port 44070 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:28:20.370821 sshd[5227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:28:20.464534 systemd-logind[1462]: New session 55 of user core.
Mar 7 01:28:20.514190 systemd[1]: Started session-55.scope - Session 55 of User core.
Mar 7 01:28:21.433453 sshd[5227]: pam_unix(sshd:session): session closed for user core
Mar 7 01:28:21.478349 systemd[1]: sshd@54-10.0.0.31:22-10.0.0.1:44070.service: Deactivated successfully.
Mar 7 01:28:21.515090 systemd[1]: session-55.scope: Deactivated successfully.
Mar 7 01:28:21.524514 systemd-logind[1462]: Session 55 logged out. Waiting for processes to exit.
Mar 7 01:28:21.535067 systemd-logind[1462]: Removed session 55.
Mar 7 01:28:21.785037 kubelet[2740]: E0307 01:28:21.784503 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:28:26.625432 systemd[1]: Started sshd@55-10.0.0.31:22-10.0.0.1:40170.service - OpenSSH per-connection server daemon (10.0.0.1:40170).
Mar 7 01:28:26.924525 sshd[5241]: Accepted publickey for core from 10.0.0.1 port 40170 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:28:26.923705 sshd[5241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:28:26.982500 systemd-logind[1462]: New session 56 of user core.
Mar 7 01:28:27.009478 systemd[1]: Started session-56.scope - Session 56 of User core.
Mar 7 01:28:27.938552 sshd[5241]: pam_unix(sshd:session): session closed for user core
Mar 7 01:28:27.985551 systemd[1]: sshd@55-10.0.0.31:22-10.0.0.1:40170.service: Deactivated successfully.
Mar 7 01:28:27.997169 systemd[1]: session-56.scope: Deactivated successfully.
Mar 7 01:28:28.007538 systemd-logind[1462]: Session 56 logged out. Waiting for processes to exit.
Mar 7 01:28:28.015601 systemd-logind[1462]: Removed session 56.
Mar 7 01:28:33.090608 systemd[1]: Started sshd@56-10.0.0.31:22-10.0.0.1:46746.service - OpenSSH per-connection server daemon (10.0.0.1:46746).
Mar 7 01:28:33.616159 sshd[5255]: Accepted publickey for core from 10.0.0.1 port 46746 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:28:33.625585 sshd[5255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:28:33.711246 systemd-logind[1462]: New session 57 of user core.
Mar 7 01:28:33.765215 systemd[1]: Started session-57.scope - Session 57 of User core.
Mar 7 01:28:34.369700 sshd[5255]: pam_unix(sshd:session): session closed for user core
Mar 7 01:28:34.419346 systemd[1]: sshd@56-10.0.0.31:22-10.0.0.1:46746.service: Deactivated successfully.
Mar 7 01:28:34.422638 systemd[1]: session-57.scope: Deactivated successfully.
Mar 7 01:28:34.433213 systemd-logind[1462]: Session 57 logged out. Waiting for processes to exit.
Mar 7 01:28:34.441378 systemd-logind[1462]: Removed session 57.
Mar 7 01:28:40.368897 systemd[1]: Started sshd@57-10.0.0.31:22-10.0.0.1:46808.service - OpenSSH per-connection server daemon (10.0.0.1:46808).
Mar 7 01:28:40.962540 sshd[5270]: Accepted publickey for core from 10.0.0.1 port 46808 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:28:40.991469 sshd[5270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:28:41.121347 systemd-logind[1462]: New session 58 of user core.
Mar 7 01:28:41.147587 systemd[1]: Started session-58.scope - Session 58 of User core.
Mar 7 01:28:42.237864 sshd[5270]: pam_unix(sshd:session): session closed for user core
Mar 7 01:28:42.272325 systemd[1]: sshd@57-10.0.0.31:22-10.0.0.1:46808.service: Deactivated successfully.
Mar 7 01:28:42.291492 systemd[1]: session-58.scope: Deactivated successfully.
Mar 7 01:28:42.538729 systemd-logind[1462]: Session 58 logged out. Waiting for processes to exit.
Mar 7 01:28:42.596639 systemd-logind[1462]: Removed session 58.
Mar 7 01:28:44.803779 kubelet[2740]: E0307 01:28:44.781767 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:28:47.410271 systemd[1]: Started sshd@58-10.0.0.31:22-10.0.0.1:36534.service - OpenSSH per-connection server daemon (10.0.0.1:36534).
Mar 7 01:28:47.748909 sshd[5287]: Accepted publickey for core from 10.0.0.1 port 36534 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:28:47.779463 sshd[5287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:28:47.829557 systemd-logind[1462]: New session 59 of user core.
Mar 7 01:28:47.849063 systemd[1]: Started session-59.scope - Session 59 of User core.
Mar 7 01:28:48.527333 sshd[5287]: pam_unix(sshd:session): session closed for user core
Mar 7 01:28:48.560796 systemd[1]: sshd@58-10.0.0.31:22-10.0.0.1:36534.service: Deactivated successfully.
Mar 7 01:28:48.570865 systemd[1]: session-59.scope: Deactivated successfully.
Mar 7 01:28:48.590613 systemd-logind[1462]: Session 59 logged out. Waiting for processes to exit.
Mar 7 01:28:48.607347 systemd-logind[1462]: Removed session 59.
Mar 7 01:28:53.634862 systemd[1]: Started sshd@59-10.0.0.31:22-10.0.0.1:40122.service - OpenSSH per-connection server daemon (10.0.0.1:40122).
Mar 7 01:28:54.006778 sshd[5301]: Accepted publickey for core from 10.0.0.1 port 40122 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:28:54.011836 sshd[5301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:28:54.057722 systemd-logind[1462]: New session 60 of user core.
Mar 7 01:28:54.089885 systemd[1]: Started session-60.scope - Session 60 of User core.
Mar 7 01:28:55.173259 sshd[5301]: pam_unix(sshd:session): session closed for user core
Mar 7 01:28:55.234853 systemd[1]: sshd@59-10.0.0.31:22-10.0.0.1:40122.service: Deactivated successfully.
Mar 7 01:28:55.268352 systemd[1]: session-60.scope: Deactivated successfully.
Mar 7 01:28:55.274086 systemd-logind[1462]: Session 60 logged out. Waiting for processes to exit.
Mar 7 01:28:55.287387 systemd-logind[1462]: Removed session 60.
Mar 7 01:29:00.319873 systemd[1]: Started sshd@60-10.0.0.31:22-10.0.0.1:59812.service - OpenSSH per-connection server daemon (10.0.0.1:59812).
Mar 7 01:29:00.569082 sshd[5316]: Accepted publickey for core from 10.0.0.1 port 59812 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:29:00.601025 sshd[5316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:29:00.644064 systemd-logind[1462]: New session 61 of user core.
Mar 7 01:29:00.669158 systemd[1]: Started session-61.scope - Session 61 of User core.
Mar 7 01:29:00.798189 kubelet[2740]: E0307 01:29:00.794559 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:29:01.470406 sshd[5316]: pam_unix(sshd:session): session closed for user core
Mar 7 01:29:01.500698 systemd[1]: sshd@60-10.0.0.31:22-10.0.0.1:59812.service: Deactivated successfully.
Mar 7 01:29:01.511811 systemd[1]: session-61.scope: Deactivated successfully.
Mar 7 01:29:01.533185 systemd-logind[1462]: Session 61 logged out. Waiting for processes to exit.
Mar 7 01:29:01.561020 systemd[1]: Started sshd@61-10.0.0.31:22-10.0.0.1:59816.service - OpenSSH per-connection server daemon (10.0.0.1:59816).
Mar 7 01:29:01.566652 systemd-logind[1462]: Removed session 61.
Mar 7 01:29:01.652578 sshd[5332]: Accepted publickey for core from 10.0.0.1 port 59816 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:29:01.661089 sshd[5332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:29:01.696701 systemd-logind[1462]: New session 62 of user core.
Mar 7 01:29:01.713233 systemd[1]: Started session-62.scope - Session 62 of User core.
Mar 7 01:29:03.243725 sshd[5332]: pam_unix(sshd:session): session closed for user core
Mar 7 01:29:03.301668 systemd[1]: sshd@61-10.0.0.31:22-10.0.0.1:59816.service: Deactivated successfully.
Mar 7 01:29:03.320137 systemd[1]: session-62.scope: Deactivated successfully.
Mar 7 01:29:03.348908 systemd-logind[1462]: Session 62 logged out. Waiting for processes to exit.
Mar 7 01:29:03.423673 systemd[1]: Started sshd@62-10.0.0.31:22-10.0.0.1:59826.service - OpenSSH per-connection server daemon (10.0.0.1:59826).
Mar 7 01:29:03.433040 systemd-logind[1462]: Removed session 62.
Mar 7 01:29:03.660112 sshd[5345]: Accepted publickey for core from 10.0.0.1 port 59826 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:29:03.661897 sshd[5345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:29:03.709255 systemd-logind[1462]: New session 63 of user core.
Mar 7 01:29:03.727468 systemd[1]: Started session-63.scope - Session 63 of User core.
Mar 7 01:29:07.334081 sshd[5345]: pam_unix(sshd:session): session closed for user core
Mar 7 01:29:07.419220 systemd[1]: sshd@62-10.0.0.31:22-10.0.0.1:59826.service: Deactivated successfully.
Mar 7 01:29:07.452823 systemd[1]: session-63.scope: Deactivated successfully.
Mar 7 01:29:07.453672 systemd[1]: session-63.scope: Consumed 1.391s CPU time.
Mar 7 01:29:07.463009 systemd-logind[1462]: Session 63 logged out. Waiting for processes to exit.
Mar 7 01:29:07.538880 systemd[1]: Started sshd@63-10.0.0.31:22-10.0.0.1:59842.service - OpenSSH per-connection server daemon (10.0.0.1:59842).
Mar 7 01:29:07.655272 systemd-logind[1462]: Removed session 63.
Mar 7 01:29:07.924825 sshd[5373]: Accepted publickey for core from 10.0.0.1 port 59842 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:29:07.927198 sshd[5373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:29:07.989169 systemd-logind[1462]: New session 64 of user core.
Mar 7 01:29:07.997324 systemd[1]: Started session-64.scope - Session 64 of User core.
Mar 7 01:29:08.792517 kubelet[2740]: E0307 01:29:08.791734 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:29:10.197297 sshd[5373]: pam_unix(sshd:session): session closed for user core
Mar 7 01:29:10.320767 systemd[1]: sshd@63-10.0.0.31:22-10.0.0.1:59842.service: Deactivated successfully.
Mar 7 01:29:10.339728 systemd[1]: session-64.scope: Deactivated successfully.
Mar 7 01:29:10.357858 systemd-logind[1462]: Session 64 logged out. Waiting for processes to exit.
Mar 7 01:29:10.448805 systemd[1]: Started sshd@64-10.0.0.31:22-10.0.0.1:32834.service - OpenSSH per-connection server daemon (10.0.0.1:32834).
Mar 7 01:29:10.457715 systemd-logind[1462]: Removed session 64.
Mar 7 01:29:10.800195 sshd[5388]: Accepted publickey for core from 10.0.0.1 port 32834 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:29:10.838738 sshd[5388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:29:10.942795 systemd-logind[1462]: New session 65 of user core.
Mar 7 01:29:10.979130 systemd[1]: Started session-65.scope - Session 65 of User core.
Mar 7 01:29:11.669249 sshd[5388]: pam_unix(sshd:session): session closed for user core
Mar 7 01:29:11.701575 systemd[1]: sshd@64-10.0.0.31:22-10.0.0.1:32834.service: Deactivated successfully.
Mar 7 01:29:11.713664 systemd[1]: session-65.scope: Deactivated successfully.
Mar 7 01:29:11.716768 systemd-logind[1462]: Session 65 logged out. Waiting for processes to exit.
Mar 7 01:29:11.722129 systemd-logind[1462]: Removed session 65.
Mar 7 01:29:16.761991 systemd[1]: Started sshd@65-10.0.0.31:22-10.0.0.1:32842.service - OpenSSH per-connection server daemon (10.0.0.1:32842).
Mar 7 01:29:16.802570 kubelet[2740]: E0307 01:29:16.802038 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:29:17.210331 sshd[5403]: Accepted publickey for core from 10.0.0.1 port 32842 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:29:17.221882 sshd[5403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:29:17.258865 systemd-logind[1462]: New session 66 of user core.
Mar 7 01:29:17.333846 systemd[1]: Started session-66.scope - Session 66 of User core.
Mar 7 01:29:18.470754 sshd[5403]: pam_unix(sshd:session): session closed for user core
Mar 7 01:29:18.552889 systemd[1]: sshd@65-10.0.0.31:22-10.0.0.1:32842.service: Deactivated successfully.
Mar 7 01:29:18.569397 systemd[1]: session-66.scope: Deactivated successfully.
Mar 7 01:29:18.607667 systemd-logind[1462]: Session 66 logged out. Waiting for processes to exit.
Mar 7 01:29:18.619822 systemd-logind[1462]: Removed session 66.
Mar 7 01:29:19.825073 kubelet[2740]: E0307 01:29:19.824839 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:29:23.556228 systemd[1]: Started sshd@66-10.0.0.31:22-10.0.0.1:59230.service - OpenSSH per-connection server daemon (10.0.0.1:59230).
Mar 7 01:29:23.951649 sshd[5417]: Accepted publickey for core from 10.0.0.1 port 59230 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:29:23.954239 sshd[5417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:29:24.115134 systemd-logind[1462]: New session 67 of user core.
Mar 7 01:29:24.137194 systemd[1]: Started session-67.scope - Session 67 of User core.
Mar 7 01:29:25.405912 sshd[5417]: pam_unix(sshd:session): session closed for user core
Mar 7 01:29:25.460756 systemd[1]: sshd@66-10.0.0.31:22-10.0.0.1:59230.service: Deactivated successfully.
Mar 7 01:29:25.530752 systemd[1]: session-67.scope: Deactivated successfully.
Mar 7 01:29:25.550779 systemd-logind[1462]: Session 67 logged out. Waiting for processes to exit.
Mar 7 01:29:25.601807 systemd-logind[1462]: Removed session 67.
Mar 7 01:29:25.799726 kubelet[2740]: E0307 01:29:25.798142 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:29:30.614900 systemd[1]: Started sshd@67-10.0.0.31:22-10.0.0.1:41154.service - OpenSSH per-connection server daemon (10.0.0.1:41154).
Mar 7 01:29:31.023273 sshd[5432]: Accepted publickey for core from 10.0.0.1 port 41154 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:29:31.038833 sshd[5432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:29:31.122846 systemd-logind[1462]: New session 68 of user core.
Mar 7 01:29:31.154238 systemd[1]: Started session-68.scope - Session 68 of User core.
Mar 7 01:29:31.934500 kubelet[2740]: E0307 01:29:31.934372 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:29:32.180172 sshd[5432]: pam_unix(sshd:session): session closed for user core
Mar 7 01:29:32.214834 systemd-logind[1462]: Session 68 logged out. Waiting for processes to exit.
Mar 7 01:29:32.218610 systemd[1]: sshd@67-10.0.0.31:22-10.0.0.1:41154.service: Deactivated successfully.
Mar 7 01:29:32.245677 systemd[1]: session-68.scope: Deactivated successfully.
Mar 7 01:29:32.255355 systemd-logind[1462]: Removed session 68.
Mar 7 01:29:32.783853 kubelet[2740]: E0307 01:29:32.778409 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:29:39.854544 systemd[1]: Started sshd@68-10.0.0.31:22-10.0.0.1:41168.service - OpenSSH per-connection server daemon (10.0.0.1:41168).
Mar 7 01:29:40.827772 sshd[5447]: Accepted publickey for core from 10.0.0.1 port 41168 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:29:40.839204 sshd[5447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:29:41.201662 systemd-logind[1462]: New session 69 of user core.
Mar 7 01:29:41.219835 systemd[1]: Started session-69.scope - Session 69 of User core.
Mar 7 01:29:41.920808 sshd[5447]: pam_unix(sshd:session): session closed for user core
Mar 7 01:29:41.947681 systemd[1]: sshd@68-10.0.0.31:22-10.0.0.1:41168.service: Deactivated successfully.
Mar 7 01:29:41.960421 systemd[1]: session-69.scope: Deactivated successfully.
Mar 7 01:29:41.968507 systemd-logind[1462]: Session 69 logged out. Waiting for processes to exit.
Mar 7 01:29:41.999859 systemd-logind[1462]: Removed session 69.
Mar 7 01:29:47.013547 systemd[1]: Started sshd@69-10.0.0.31:22-10.0.0.1:44874.service - OpenSSH per-connection server daemon (10.0.0.1:44874).
Mar 7 01:29:47.374586 sshd[5464]: Accepted publickey for core from 10.0.0.1 port 44874 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:29:47.394144 sshd[5464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:29:47.450550 systemd-logind[1462]: New session 70 of user core.
Mar 7 01:29:47.474629 systemd[1]: Started session-70.scope - Session 70 of User core.
Mar 7 01:29:47.969741 sshd[5464]: pam_unix(sshd:session): session closed for user core
Mar 7 01:29:48.009235 systemd[1]: sshd@69-10.0.0.31:22-10.0.0.1:44874.service: Deactivated successfully.
Mar 7 01:29:48.019143 systemd[1]: session-70.scope: Deactivated successfully.
Mar 7 01:29:48.032764 systemd-logind[1462]: Session 70 logged out. Waiting for processes to exit.
Mar 7 01:29:48.039173 systemd-logind[1462]: Removed session 70.
Mar 7 01:29:53.805208 systemd[1]: Started sshd@70-10.0.0.31:22-10.0.0.1:52568.service - OpenSSH per-connection server daemon (10.0.0.1:52568).
Mar 7 01:29:53.905452 sshd[5479]: Accepted publickey for core from 10.0.0.1 port 52568 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:29:53.919159 sshd[5479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:29:55.120391 systemd-logind[1462]: New session 71 of user core.
Mar 7 01:29:55.227076 systemd[1]: Started session-71.scope - Session 71 of User core.
Mar 7 01:29:56.057364 sshd[5479]: pam_unix(sshd:session): session closed for user core
Mar 7 01:29:56.099897 systemd[1]: sshd@70-10.0.0.31:22-10.0.0.1:52568.service: Deactivated successfully.
Mar 7 01:29:56.124148 systemd[1]: session-71.scope: Deactivated successfully.
Mar 7 01:29:56.138175 systemd-logind[1462]: Session 71 logged out. Waiting for processes to exit.
Mar 7 01:29:56.160265 systemd-logind[1462]: Removed session 71.
Mar 7 01:29:58.776474 kubelet[2740]: E0307 01:29:58.774595 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:30:01.155289 systemd[1]: Started sshd@71-10.0.0.31:22-10.0.0.1:37122.service - OpenSSH per-connection server daemon (10.0.0.1:37122).
Mar 7 01:30:01.373869 sshd[5494]: Accepted publickey for core from 10.0.0.1 port 37122 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:30:01.384349 sshd[5494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:30:01.422884 systemd-logind[1462]: New session 72 of user core.
Mar 7 01:30:01.473146 systemd[1]: Started session-72.scope - Session 72 of User core.
Mar 7 01:30:02.060219 sshd[5494]: pam_unix(sshd:session): session closed for user core
Mar 7 01:30:02.075870 systemd[1]: sshd@71-10.0.0.31:22-10.0.0.1:37122.service: Deactivated successfully.
Mar 7 01:30:02.087509 systemd[1]: session-72.scope: Deactivated successfully.
Mar 7 01:30:02.108111 systemd-logind[1462]: Session 72 logged out. Waiting for processes to exit.
Mar 7 01:30:02.128986 systemd-logind[1462]: Removed session 72.
Mar 7 01:30:07.201258 systemd[1]: Started sshd@72-10.0.0.31:22-10.0.0.1:37124.service - OpenSSH per-connection server daemon (10.0.0.1:37124).
Mar 7 01:30:07.621377 sshd[5511]: Accepted publickey for core from 10.0.0.1 port 37124 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:30:07.624310 sshd[5511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:30:07.652583 systemd-logind[1462]: New session 73 of user core.
Mar 7 01:30:07.755454 systemd[1]: Started session-73.scope - Session 73 of User core.
Mar 7 01:30:08.668548 sshd[5511]: pam_unix(sshd:session): session closed for user core
Mar 7 01:30:08.694575 systemd[1]: sshd@72-10.0.0.31:22-10.0.0.1:37124.service: Deactivated successfully.
Mar 7 01:30:08.707586 systemd[1]: session-73.scope: Deactivated successfully.
Mar 7 01:30:08.723373 systemd-logind[1462]: Session 73 logged out. Waiting for processes to exit.
Mar 7 01:30:08.741805 systemd-logind[1462]: Removed session 73.
Mar 7 01:30:08.791878 kubelet[2740]: E0307 01:30:08.779152 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:30:09.791733 kubelet[2740]: E0307 01:30:09.791207 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:30:13.817340 systemd[1]: Started sshd@73-10.0.0.31:22-10.0.0.1:38684.service - OpenSSH per-connection server daemon (10.0.0.1:38684).
Mar 7 01:30:14.257533 sshd[5527]: Accepted publickey for core from 10.0.0.1 port 38684 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:30:14.285375 sshd[5527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:30:14.355364 systemd-logind[1462]: New session 74 of user core.
Mar 7 01:30:14.426812 systemd[1]: Started session-74.scope - Session 74 of User core.
Mar 7 01:30:15.245654 sshd[5527]: pam_unix(sshd:session): session closed for user core
Mar 7 01:30:15.269997 systemd[1]: sshd@73-10.0.0.31:22-10.0.0.1:38684.service: Deactivated successfully.
Mar 7 01:30:15.297051 systemd[1]: session-74.scope: Deactivated successfully.
Mar 7 01:30:15.307215 systemd-logind[1462]: Session 74 logged out. Waiting for processes to exit.
Mar 7 01:30:15.315223 systemd-logind[1462]: Removed session 74.
Mar 7 01:30:20.363547 systemd[1]: Started sshd@74-10.0.0.31:22-10.0.0.1:35984.service - OpenSSH per-connection server daemon (10.0.0.1:35984).
Mar 7 01:30:25.915193 sshd[5542]: Accepted publickey for core from 10.0.0.1 port 35984 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:30:25.969636 sshd[5542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:30:26.104128 systemd-logind[1462]: New session 75 of user core.
Mar 7 01:30:26.153406 systemd[1]: Started session-75.scope - Session 75 of User core.
Mar 7 01:30:35.375420 kubelet[2740]: E0307 01:30:35.375274 2740 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.54s"
Mar 7 01:30:37.320893 kubelet[2740]: E0307 01:30:37.245813 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:30:37.932732 sshd[5542]: pam_unix(sshd:session): session closed for user core
Mar 7 01:30:37.959146 systemd[1]: sshd@74-10.0.0.31:22-10.0.0.1:35984.service: Deactivated successfully.
Mar 7 01:30:38.014479 systemd[1]: session-75.scope: Deactivated successfully.
Mar 7 01:30:38.018149 systemd[1]: session-75.scope: Consumed 2.712s CPU time.
Mar 7 01:30:38.045488 systemd-logind[1462]: Session 75 logged out. Waiting for processes to exit.
Mar 7 01:30:38.059592 systemd-logind[1462]: Removed session 75.
Mar 7 01:30:38.801990 kubelet[2740]: E0307 01:30:38.800801 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:30:41.807908 kubelet[2740]: E0307 01:30:41.806981 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:30:43.059322 systemd[1]: Started sshd@75-10.0.0.31:22-10.0.0.1:43744.service - OpenSSH per-connection server daemon (10.0.0.1:43744).
Mar 7 01:30:43.430904 sshd[5560]: Accepted publickey for core from 10.0.0.1 port 43744 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:30:43.456685 sshd[5560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:30:43.514623 systemd-logind[1462]: New session 76 of user core.
Mar 7 01:30:43.545110 systemd[1]: Started session-76.scope - Session 76 of User core.
Mar 7 01:30:43.799118 kubelet[2740]: E0307 01:30:43.798467 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:30:44.063670 sshd[5560]: pam_unix(sshd:session): session closed for user core
Mar 7 01:30:44.069153 systemd[1]: sshd@75-10.0.0.31:22-10.0.0.1:43744.service: Deactivated successfully.
Mar 7 01:30:44.074707 systemd[1]: session-76.scope: Deactivated successfully.
Mar 7 01:30:44.094136 systemd-logind[1462]: Session 76 logged out. Waiting for processes to exit.
Mar 7 01:30:44.099194 systemd-logind[1462]: Removed session 76.
Mar 7 01:30:49.159485 systemd[1]: Started sshd@76-10.0.0.31:22-10.0.0.1:43754.service - OpenSSH per-connection server daemon (10.0.0.1:43754).
Mar 7 01:30:49.288511 sshd[5577]: Accepted publickey for core from 10.0.0.1 port 43754 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:30:49.294072 sshd[5577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:30:49.332213 systemd-logind[1462]: New session 77 of user core.
Mar 7 01:30:49.342591 systemd[1]: Started session-77.scope - Session 77 of User core.
Mar 7 01:30:50.005458 sshd[5577]: pam_unix(sshd:session): session closed for user core
Mar 7 01:30:50.016848 systemd[1]: sshd@76-10.0.0.31:22-10.0.0.1:43754.service: Deactivated successfully.
Mar 7 01:30:50.047104 systemd[1]: session-77.scope: Deactivated successfully.
Mar 7 01:30:50.049802 systemd-logind[1462]: Session 77 logged out. Waiting for processes to exit.
Mar 7 01:30:50.066158 systemd-logind[1462]: Removed session 77.
Mar 7 01:30:53.790058 kubelet[2740]: E0307 01:30:53.775740 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:30:55.051455 systemd[1]: Started sshd@77-10.0.0.31:22-10.0.0.1:34856.service - OpenSSH per-connection server daemon (10.0.0.1:34856).
Mar 7 01:30:55.362679 sshd[5592]: Accepted publickey for core from 10.0.0.1 port 34856 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:30:55.391691 sshd[5592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:30:55.522144 systemd-logind[1462]: New session 78 of user core.
Mar 7 01:30:55.611293 systemd[1]: Started session-78.scope - Session 78 of User core.
Mar 7 01:30:56.235601 sshd[5592]: pam_unix(sshd:session): session closed for user core
Mar 7 01:30:56.250006 systemd[1]: sshd@77-10.0.0.31:22-10.0.0.1:34856.service: Deactivated successfully.
Mar 7 01:30:56.259863 systemd[1]: session-78.scope: Deactivated successfully.
Mar 7 01:30:56.274418 systemd-logind[1462]: Session 78 logged out. Waiting for processes to exit.
Mar 7 01:30:56.277680 systemd-logind[1462]: Removed session 78.
Mar 7 01:31:01.298718 systemd[1]: Started sshd@78-10.0.0.31:22-10.0.0.1:42670.service - OpenSSH per-connection server daemon (10.0.0.1:42670).
Mar 7 01:31:01.513166 sshd[5607]: Accepted publickey for core from 10.0.0.1 port 42670 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:31:01.521622 sshd[5607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:31:01.554034 systemd-logind[1462]: New session 79 of user core.
Mar 7 01:31:01.570035 systemd[1]: Started session-79.scope - Session 79 of User core.
Mar 7 01:31:02.056483 sshd[5607]: pam_unix(sshd:session): session closed for user core
Mar 7 01:31:02.073740 systemd[1]: sshd@78-10.0.0.31:22-10.0.0.1:42670.service: Deactivated successfully.
Mar 7 01:31:02.083223 systemd[1]: session-79.scope: Deactivated successfully.
Mar 7 01:31:02.093764 systemd-logind[1462]: Session 79 logged out. Waiting for processes to exit.
Mar 7 01:31:02.111703 systemd-logind[1462]: Removed session 79.
Mar 7 01:31:07.121662 systemd[1]: Started sshd@79-10.0.0.31:22-10.0.0.1:42672.service - OpenSSH per-connection server daemon (10.0.0.1:42672).
Mar 7 01:31:07.252553 sshd[5625]: Accepted publickey for core from 10.0.0.1 port 42672 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:31:07.260672 sshd[5625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:31:07.320518 systemd-logind[1462]: New session 80 of user core.
Mar 7 01:31:07.331322 systemd[1]: Started session-80.scope - Session 80 of User core.
Mar 7 01:31:07.845655 sshd[5625]: pam_unix(sshd:session): session closed for user core
Mar 7 01:31:07.872461 systemd[1]: sshd@79-10.0.0.31:22-10.0.0.1:42672.service: Deactivated successfully.
Mar 7 01:31:07.881510 systemd[1]: session-80.scope: Deactivated successfully.
Mar 7 01:31:07.897681 systemd-logind[1462]: Session 80 logged out. Waiting for processes to exit.
Mar 7 01:31:07.909166 systemd-logind[1462]: Removed session 80.
Mar 7 01:31:11.798074 kubelet[2740]: E0307 01:31:11.792214 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:31:12.914562 systemd[1]: Started sshd@80-10.0.0.31:22-10.0.0.1:44408.service - OpenSSH per-connection server daemon (10.0.0.1:44408).
Mar 7 01:31:13.058295 sshd[5640]: Accepted publickey for core from 10.0.0.1 port 44408 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:31:13.064303 sshd[5640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:31:13.102258 systemd-logind[1462]: New session 81 of user core.
Mar 7 01:31:13.118500 systemd[1]: Started session-81.scope - Session 81 of User core.
Mar 7 01:31:13.739372 sshd[5640]: pam_unix(sshd:session): session closed for user core
Mar 7 01:31:13.788455 systemd[1]: sshd@80-10.0.0.31:22-10.0.0.1:44408.service: Deactivated successfully.
Mar 7 01:31:13.817640 systemd[1]: session-81.scope: Deactivated successfully.
Mar 7 01:31:13.826746 systemd-logind[1462]: Session 81 logged out. Waiting for processes to exit.
Mar 7 01:31:13.843192 systemd-logind[1462]: Removed session 81.
Mar 7 01:31:18.809539 systemd[1]: Started sshd@81-10.0.0.31:22-10.0.0.1:44414.service - OpenSSH per-connection server daemon (10.0.0.1:44414).
Mar 7 01:31:18.950031 sshd[5655]: Accepted publickey for core from 10.0.0.1 port 44414 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:31:18.968883 sshd[5655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:31:19.030911 systemd-logind[1462]: New session 82 of user core.
Mar 7 01:31:19.069519 systemd[1]: Started session-82.scope - Session 82 of User core.
Mar 7 01:31:19.629472 sshd[5655]: pam_unix(sshd:session): session closed for user core
Mar 7 01:31:19.654895 systemd[1]: sshd@81-10.0.0.31:22-10.0.0.1:44414.service: Deactivated successfully.
Mar 7 01:31:19.661680 systemd[1]: session-82.scope: Deactivated successfully.
Mar 7 01:31:19.665244 systemd-logind[1462]: Session 82 logged out. Waiting for processes to exit.
Mar 7 01:31:19.693874 systemd[1]: Started sshd@82-10.0.0.31:22-10.0.0.1:44416.service - OpenSSH per-connection server daemon (10.0.0.1:44416).
Mar 7 01:31:19.717524 systemd-logind[1462]: Removed session 82.
Mar 7 01:31:19.867339 sshd[5669]: Accepted publickey for core from 10.0.0.1 port 44416 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:31:19.870750 sshd[5669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:31:19.955751 systemd-logind[1462]: New session 83 of user core.
Mar 7 01:31:19.970910 systemd[1]: Started session-83.scope - Session 83 of User core.
Mar 7 01:31:23.792219 kubelet[2740]: E0307 01:31:23.791465 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:31:24.031659 containerd[1478]: time="2026-03-07T01:31:24.025568922Z" level=info msg="StopContainer for \"314228e09dd3f00246522a53ab0401ab966719c2ced2d578f3c946d428390b96\" with timeout 30 (s)"
Mar 7 01:31:24.062745 containerd[1478]: time="2026-03-07T01:31:24.062122927Z" level=info msg="Stop container \"314228e09dd3f00246522a53ab0401ab966719c2ced2d578f3c946d428390b96\" with signal terminated"
Mar 7 01:31:24.407494 systemd[1]: run-containerd-runc-k8s.io-0892415e55c9535cd28023154245b52a5a8105ae62a69f07962e021a3296ae56-runc.ectAfS.mount: Deactivated successfully.
Mar 7 01:31:24.422619 systemd[1]: cri-containerd-314228e09dd3f00246522a53ab0401ab966719c2ced2d578f3c946d428390b96.scope: Deactivated successfully.
Mar 7 01:31:24.423192 systemd[1]: cri-containerd-314228e09dd3f00246522a53ab0401ab966719c2ced2d578f3c946d428390b96.scope: Consumed 6.156s CPU time.
Mar 7 01:31:24.732470 containerd[1478]: time="2026-03-07T01:31:24.728561785Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 7 01:31:24.799463 kubelet[2740]: E0307 01:31:24.797618 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 7 01:31:24.851675 containerd[1478]: time="2026-03-07T01:31:24.850839249Z" level=info msg="StopContainer for \"0892415e55c9535cd28023154245b52a5a8105ae62a69f07962e021a3296ae56\" with timeout 2 (s)"
Mar 7 01:31:24.853359 containerd[1478]: time="2026-03-07T01:31:24.852743702Z" level=info msg="Stop container \"0892415e55c9535cd28023154245b52a5a8105ae62a69f07962e021a3296ae56\" with signal terminated"
Mar 7 01:31:24.885812 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-314228e09dd3f00246522a53ab0401ab966719c2ced2d578f3c946d428390b96-rootfs.mount: Deactivated successfully.
Mar 7 01:31:24.908073 systemd-networkd[1397]: lxc_health: Link DOWN
Mar 7 01:31:24.908085 systemd-networkd[1397]: lxc_health: Lost carrier
Mar 7 01:31:24.987319 containerd[1478]: time="2026-03-07T01:31:24.986200739Z" level=info msg="shim disconnected" id=314228e09dd3f00246522a53ab0401ab966719c2ced2d578f3c946d428390b96 namespace=k8s.io
Mar 7 01:31:24.987319 containerd[1478]: time="2026-03-07T01:31:24.986296598Z" level=warning msg="cleaning up after shim disconnected" id=314228e09dd3f00246522a53ab0401ab966719c2ced2d578f3c946d428390b96 namespace=k8s.io
Mar 7 01:31:24.987319 containerd[1478]: time="2026-03-07T01:31:24.986316896Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:31:25.065669 systemd[1]: cri-containerd-0892415e55c9535cd28023154245b52a5a8105ae62a69f07962e021a3296ae56.scope: Deactivated successfully.
Mar 7 01:31:25.068314 systemd[1]: cri-containerd-0892415e55c9535cd28023154245b52a5a8105ae62a69f07962e021a3296ae56.scope: Consumed 39.471s CPU time.
Mar 7 01:31:25.109362 containerd[1478]: time="2026-03-07T01:31:25.109219537Z" level=info msg="StopContainer for \"314228e09dd3f00246522a53ab0401ab966719c2ced2d578f3c946d428390b96\" returns successfully"
Mar 7 01:31:25.120707 containerd[1478]: time="2026-03-07T01:31:25.117219484Z" level=info msg="StopPodSandbox for \"19242b35dba757eab2a47cdd404da35d99ec9aafb76a7e887b6d863a604da086\""
Mar 7 01:31:25.120707 containerd[1478]: time="2026-03-07T01:31:25.117330030Z" level=info msg="Container to stop \"314228e09dd3f00246522a53ab0401ab966719c2ced2d578f3c946d428390b96\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 01:31:25.123900 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-19242b35dba757eab2a47cdd404da35d99ec9aafb76a7e887b6d863a604da086-shm.mount: Deactivated successfully.
Mar 7 01:31:25.162429 systemd[1]: cri-containerd-19242b35dba757eab2a47cdd404da35d99ec9aafb76a7e887b6d863a604da086.scope: Deactivated successfully.
Mar 7 01:31:25.194718 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0892415e55c9535cd28023154245b52a5a8105ae62a69f07962e021a3296ae56-rootfs.mount: Deactivated successfully.
Mar 7 01:31:25.239085 containerd[1478]: time="2026-03-07T01:31:25.235329374Z" level=info msg="shim disconnected" id=0892415e55c9535cd28023154245b52a5a8105ae62a69f07962e021a3296ae56 namespace=k8s.io
Mar 7 01:31:25.239085 containerd[1478]: time="2026-03-07T01:31:25.235419922Z" level=warning msg="cleaning up after shim disconnected" id=0892415e55c9535cd28023154245b52a5a8105ae62a69f07962e021a3296ae56 namespace=k8s.io
Mar 7 01:31:25.239085 containerd[1478]: time="2026-03-07T01:31:25.235438035Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:31:25.331186 containerd[1478]: time="2026-03-07T01:31:25.329314800Z" level=warning msg="cleanup warnings time=\"2026-03-07T01:31:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 7 01:31:25.343604 containerd[1478]: time="2026-03-07T01:31:25.343533044Z" level=info msg="shim disconnected" id=19242b35dba757eab2a47cdd404da35d99ec9aafb76a7e887b6d863a604da086 namespace=k8s.io
Mar 7 01:31:25.344218 containerd[1478]: time="2026-03-07T01:31:25.343763054Z" level=warning msg="cleaning up after shim disconnected" id=19242b35dba757eab2a47cdd404da35d99ec9aafb76a7e887b6d863a604da086 namespace=k8s.io
Mar 7 01:31:25.344218 containerd[1478]: time="2026-03-07T01:31:25.343781908Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:31:25.353757 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19242b35dba757eab2a47cdd404da35d99ec9aafb76a7e887b6d863a604da086-rootfs.mount: Deactivated successfully.
Mar 7 01:31:25.379683 containerd[1478]: time="2026-03-07T01:31:25.373078979Z" level=info msg="StopContainer for \"0892415e55c9535cd28023154245b52a5a8105ae62a69f07962e021a3296ae56\" returns successfully"
Mar 7 01:31:25.396101 containerd[1478]: time="2026-03-07T01:31:25.396037008Z" level=info msg="StopPodSandbox for \"8ac02ce029f5409ebd98fc29ab3e37d65aba8544d3d37b37d0513ab433677ab6\""
Mar 7 01:31:25.396669 containerd[1478]: time="2026-03-07T01:31:25.396148837Z" level=info msg="Container to stop \"22a3a73a0f3899ea76814b74467917573db77267cf7d1641c075af5e47db5882\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 01:31:25.396669 containerd[1478]: time="2026-03-07T01:31:25.396179214Z" level=info msg="Container to stop \"ddc565909c8e69060b0f12175897c31a6af9316c69cbfd35ef3e4afd03fbb86f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 01:31:25.396669 containerd[1478]: time="2026-03-07T01:31:25.396208568Z" level=info msg="Container to stop \"0892415e55c9535cd28023154245b52a5a8105ae62a69f07962e021a3296ae56\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 01:31:25.396669 containerd[1478]: time="2026-03-07T01:31:25.396228787Z" level=info msg="Container to stop \"3de17d6a79134d23587392fd00735b50c8f59a5dad3dcd587eba759d2c762d41\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 01:31:25.396669 containerd[1478]: time="2026-03-07T01:31:25.396244095Z" level=info msg="Container to stop \"1c09cb310c03fd8dd76ede799b70aa79d311ae32a9b1e44cf56b67c5ad924d20\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 7 01:31:25.407121 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8ac02ce029f5409ebd98fc29ab3e37d65aba8544d3d37b37d0513ab433677ab6-shm.mount: Deactivated successfully.
Mar 7 01:31:25.488413 containerd[1478]: time="2026-03-07T01:31:25.484561810Z" level=warning msg="cleanup warnings time=\"2026-03-07T01:31:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 7 01:31:25.488463 sshd[5669]: pam_unix(sshd:session): session closed for user core
Mar 7 01:31:25.511025 systemd[1]: sshd@82-10.0.0.31:22-10.0.0.1:44416.service: Deactivated successfully.
Mar 7 01:31:25.515904 systemd[1]: session-83.scope: Deactivated successfully.
Mar 7 01:31:25.516610 containerd[1478]: time="2026-03-07T01:31:25.516570907Z" level=info msg="TearDown network for sandbox \"19242b35dba757eab2a47cdd404da35d99ec9aafb76a7e887b6d863a604da086\" successfully"
Mar 7 01:31:25.518423 systemd[1]: session-83.scope: Consumed 1.509s CPU time.
Mar 7 01:31:25.518878 systemd[1]: cri-containerd-8ac02ce029f5409ebd98fc29ab3e37d65aba8544d3d37b37d0513ab433677ab6.scope: Deactivated successfully.
Mar 7 01:31:25.521487 containerd[1478]: time="2026-03-07T01:31:25.521426463Z" level=info msg="StopPodSandbox for \"19242b35dba757eab2a47cdd404da35d99ec9aafb76a7e887b6d863a604da086\" returns successfully"
Mar 7 01:31:25.525303 systemd-logind[1462]: Session 83 logged out. Waiting for processes to exit.
Mar 7 01:31:25.549041 systemd[1]: Started sshd@83-10.0.0.31:22-10.0.0.1:40726.service - OpenSSH per-connection server daemon (10.0.0.1:40726).
Mar 7 01:31:25.557445 systemd-logind[1462]: Removed session 83.
Mar 7 01:31:25.558645 kubelet[2740]: E0307 01:31:25.558323 2740 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 7 01:31:25.596717 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ac02ce029f5409ebd98fc29ab3e37d65aba8544d3d37b37d0513ab433677ab6-rootfs.mount: Deactivated successfully.
Mar 7 01:31:25.644996 containerd[1478]: time="2026-03-07T01:31:25.644060562Z" level=info msg="shim disconnected" id=8ac02ce029f5409ebd98fc29ab3e37d65aba8544d3d37b37d0513ab433677ab6 namespace=k8s.io
Mar 7 01:31:25.644996 containerd[1478]: time="2026-03-07T01:31:25.644115244Z" level=warning msg="cleaning up after shim disconnected" id=8ac02ce029f5409ebd98fc29ab3e37d65aba8544d3d37b37d0513ab433677ab6 namespace=k8s.io
Mar 7 01:31:25.644996 containerd[1478]: time="2026-03-07T01:31:25.644129190Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 01:31:25.713114 containerd[1478]: time="2026-03-07T01:31:25.712883258Z" level=info msg="TearDown network for sandbox \"8ac02ce029f5409ebd98fc29ab3e37d65aba8544d3d37b37d0513ab433677ab6\" successfully"
Mar 7 01:31:25.713114 containerd[1478]: time="2026-03-07T01:31:25.713112856Z" level=info msg="StopPodSandbox for \"8ac02ce029f5409ebd98fc29ab3e37d65aba8544d3d37b37d0513ab433677ab6\" returns successfully"
Mar 7 01:31:25.728544 sshd[5810]: Accepted publickey for core from 10.0.0.1 port 40726 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E
Mar 7 01:31:25.739332 sshd[5810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 01:31:25.745567 kubelet[2740]: I0307 01:31:25.745376 2740 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8b3fe19e-aafe-42f5-be8f-025558c799ca-cilium-config-path\") pod \"8b3fe19e-aafe-42f5-be8f-025558c799ca\" (UID: \"8b3fe19e-aafe-42f5-be8f-025558c799ca\") "
Mar 7 01:31:25.745841 kubelet[2740]: I0307 01:31:25.745574 2740 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bs77\" (UniqueName: \"kubernetes.io/projected/8b3fe19e-aafe-42f5-be8f-025558c799ca-kube-api-access-2bs77\") pod \"8b3fe19e-aafe-42f5-be8f-025558c799ca\" (UID: \"8b3fe19e-aafe-42f5-be8f-025558c799ca\") "
Mar 7 01:31:25.758374 systemd-logind[1462]: New session 84 of user core.
Mar 7 01:31:25.762798 kubelet[2740]: I0307 01:31:25.761731 2740 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b3fe19e-aafe-42f5-be8f-025558c799ca-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8b3fe19e-aafe-42f5-be8f-025558c799ca" (UID: "8b3fe19e-aafe-42f5-be8f-025558c799ca"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 7 01:31:25.783369 systemd[1]: Started session-84.scope - Session 84 of User core.
Mar 7 01:31:25.795272 kubelet[2740]: I0307 01:31:25.793880 2740 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b3fe19e-aafe-42f5-be8f-025558c799ca-kube-api-access-2bs77" (OuterVolumeSpecName: "kube-api-access-2bs77") pod "8b3fe19e-aafe-42f5-be8f-025558c799ca" (UID: "8b3fe19e-aafe-42f5-be8f-025558c799ca"). InnerVolumeSpecName "kube-api-access-2bs77". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 7 01:31:25.800683 systemd[1]: var-lib-kubelet-pods-8b3fe19e\x2daafe\x2d42f5\x2dbe8f\x2d025558c799ca-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2bs77.mount: Deactivated successfully.
Mar 7 01:31:25.840596 systemd[1]: Removed slice kubepods-besteffort-pod8b3fe19e_aafe_42f5_be8f_025558c799ca.slice - libcontainer container kubepods-besteffort-pod8b3fe19e_aafe_42f5_be8f_025558c799ca.slice.
Mar 7 01:31:25.840760 systemd[1]: kubepods-besteffort-pod8b3fe19e_aafe_42f5_be8f_025558c799ca.slice: Consumed 6.353s CPU time.
Mar 7 01:31:25.846903 kubelet[2740]: I0307 01:31:25.845883 2740 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c0367950-f8de-4cea-8cbc-20a8d9150e54-bpf-maps\") pod \"c0367950-f8de-4cea-8cbc-20a8d9150e54\" (UID: \"c0367950-f8de-4cea-8cbc-20a8d9150e54\") "
Mar 7 01:31:25.846903 kubelet[2740]: I0307 01:31:25.846082 2740 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c0367950-f8de-4cea-8cbc-20a8d9150e54-etc-cni-netd\") pod \"c0367950-f8de-4cea-8cbc-20a8d9150e54\" (UID: \"c0367950-f8de-4cea-8cbc-20a8d9150e54\") "
Mar 7 01:31:25.846903 kubelet[2740]: I0307 01:31:25.846112 2740 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c0367950-f8de-4cea-8cbc-20a8d9150e54-hostproc\") pod \"c0367950-f8de-4cea-8cbc-20a8d9150e54\" (UID: \"c0367950-f8de-4cea-8cbc-20a8d9150e54\") "
Mar 7 01:31:25.846903 kubelet[2740]: I0307 01:31:25.846130 2740 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c0367950-f8de-4cea-8cbc-20a8d9150e54-xtables-lock\") pod \"c0367950-f8de-4cea-8cbc-20a8d9150e54\" (UID: \"c0367950-f8de-4cea-8cbc-20a8d9150e54\") "
Mar 7 01:31:25.846903 kubelet[2740]: I0307 01:31:25.846155 2740 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c0367950-f8de-4cea-8cbc-20a8d9150e54-cilium-run\") pod \"c0367950-f8de-4cea-8cbc-20a8d9150e54\" (UID: \"c0367950-f8de-4cea-8cbc-20a8d9150e54\") "
Mar 7 01:31:25.846903 kubelet[2740]: I0307 01:31:25.846192 2740 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c0367950-f8de-4cea-8cbc-20a8d9150e54-hubble-tls\") pod \"c0367950-f8de-4cea-8cbc-20a8d9150e54\" (UID: \"c0367950-f8de-4cea-8cbc-20a8d9150e54\") "
Mar 7 01:31:25.849809 kubelet[2740]: I0307 01:31:25.846219 2740 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c0367950-f8de-4cea-8cbc-20a8d9150e54-lib-modules\") pod \"c0367950-f8de-4cea-8cbc-20a8d9150e54\" (UID: \"c0367950-f8de-4cea-8cbc-20a8d9150e54\") "
Mar 7 01:31:25.849809 kubelet[2740]: I0307 01:31:25.846328 2740 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c0367950-f8de-4cea-8cbc-20a8d9150e54-host-proc-sys-kernel\") pod \"c0367950-f8de-4cea-8cbc-20a8d9150e54\" (UID: \"c0367950-f8de-4cea-8cbc-20a8d9150e54\") "
Mar 7 01:31:25.849809 kubelet[2740]: I0307 01:31:25.846360 2740 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c0367950-f8de-4cea-8cbc-20a8d9150e54-cni-path\") pod \"c0367950-f8de-4cea-8cbc-20a8d9150e54\" (UID: \"c0367950-f8de-4cea-8cbc-20a8d9150e54\") "
Mar 7 01:31:25.849809 kubelet[2740]: I0307 01:31:25.846385 2740 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c0367950-f8de-4cea-8cbc-20a8d9150e54-cilium-config-path\") pod \"c0367950-f8de-4cea-8cbc-20a8d9150e54\" (UID: \"c0367950-f8de-4cea-8cbc-20a8d9150e54\") "
Mar 7 01:31:25.849809 kubelet[2740]: I0307 01:31:25.846405 2740 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c0367950-f8de-4cea-8cbc-20a8d9150e54-clustermesh-secrets\") pod \"c0367950-f8de-4cea-8cbc-20a8d9150e54\" (UID: \"c0367950-f8de-4cea-8cbc-20a8d9150e54\") "
Mar 7 01:31:25.849809 kubelet[2740]: I0307 01:31:25.846423 2740 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c0367950-f8de-4cea-8cbc-20a8d9150e54-host-proc-sys-net\") pod \"c0367950-f8de-4cea-8cbc-20a8d9150e54\" (UID: \"c0367950-f8de-4cea-8cbc-20a8d9150e54\") "
Mar 7 01:31:25.850186 kubelet[2740]: I0307 01:31:25.846447 2740 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c0367950-f8de-4cea-8cbc-20a8d9150e54-cilium-cgroup\") pod \"c0367950-f8de-4cea-8cbc-20a8d9150e54\" (UID: \"c0367950-f8de-4cea-8cbc-20a8d9150e54\") "
Mar 7 01:31:25.850186 kubelet[2740]: I0307 01:31:25.846475 2740 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsvjs\" (UniqueName: \"kubernetes.io/projected/c0367950-f8de-4cea-8cbc-20a8d9150e54-kube-api-access-nsvjs\") pod \"c0367950-f8de-4cea-8cbc-20a8d9150e54\" (UID: \"c0367950-f8de-4cea-8cbc-20a8d9150e54\") "
Mar 7 01:31:25.850186 kubelet[2740]: I0307 01:31:25.846530 2740 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8b3fe19e-aafe-42f5-be8f-025558c799ca-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 7 01:31:25.850186 kubelet[2740]: I0307 01:31:25.846546 2740 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2bs77\" (UniqueName: \"kubernetes.io/projected/8b3fe19e-aafe-42f5-be8f-025558c799ca-kube-api-access-2bs77\") on node \"localhost\" DevicePath \"\""
Mar 7 01:31:25.850186 kubelet[2740]: I0307 01:31:25.848352 2740 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0367950-f8de-4cea-8cbc-20a8d9150e54-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c0367950-f8de-4cea-8cbc-20a8d9150e54" (UID: "c0367950-f8de-4cea-8cbc-20a8d9150e54"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:31:25.850186 kubelet[2740]: I0307 01:31:25.848396 2740 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0367950-f8de-4cea-8cbc-20a8d9150e54-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c0367950-f8de-4cea-8cbc-20a8d9150e54" (UID: "c0367950-f8de-4cea-8cbc-20a8d9150e54"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:31:25.850709 kubelet[2740]: I0307 01:31:25.848417 2740 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0367950-f8de-4cea-8cbc-20a8d9150e54-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c0367950-f8de-4cea-8cbc-20a8d9150e54" (UID: "c0367950-f8de-4cea-8cbc-20a8d9150e54"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:31:25.850709 kubelet[2740]: I0307 01:31:25.848438 2740 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0367950-f8de-4cea-8cbc-20a8d9150e54-hostproc" (OuterVolumeSpecName: "hostproc") pod "c0367950-f8de-4cea-8cbc-20a8d9150e54" (UID: "c0367950-f8de-4cea-8cbc-20a8d9150e54"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:31:25.850709 kubelet[2740]: I0307 01:31:25.848456 2740 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0367950-f8de-4cea-8cbc-20a8d9150e54-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c0367950-f8de-4cea-8cbc-20a8d9150e54" (UID: "c0367950-f8de-4cea-8cbc-20a8d9150e54"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:31:25.850709 kubelet[2740]: I0307 01:31:25.848474 2740 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0367950-f8de-4cea-8cbc-20a8d9150e54-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c0367950-f8de-4cea-8cbc-20a8d9150e54" (UID: "c0367950-f8de-4cea-8cbc-20a8d9150e54"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:31:25.850709 kubelet[2740]: I0307 01:31:25.849698 2740 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0367950-f8de-4cea-8cbc-20a8d9150e54-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c0367950-f8de-4cea-8cbc-20a8d9150e54" (UID: "c0367950-f8de-4cea-8cbc-20a8d9150e54"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:31:25.850900 kubelet[2740]: I0307 01:31:25.849783 2740 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0367950-f8de-4cea-8cbc-20a8d9150e54-cni-path" (OuterVolumeSpecName: "cni-path") pod "c0367950-f8de-4cea-8cbc-20a8d9150e54" (UID: "c0367950-f8de-4cea-8cbc-20a8d9150e54"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:31:25.850900 kubelet[2740]: I0307 01:31:25.849813 2740 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0367950-f8de-4cea-8cbc-20a8d9150e54-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c0367950-f8de-4cea-8cbc-20a8d9150e54" (UID: "c0367950-f8de-4cea-8cbc-20a8d9150e54"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 7 01:31:25.850900 kubelet[2740]: I0307 01:31:25.850202 2740 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0367950-f8de-4cea-8cbc-20a8d9150e54-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c0367950-f8de-4cea-8cbc-20a8d9150e54" (UID: "c0367950-f8de-4cea-8cbc-20a8d9150e54"). InnerVolumeSpecName "cilium-cgroup".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 7 01:31:25.871411 systemd[1]: var-lib-kubelet-pods-c0367950\x2df8de\x2d4cea\x2d8cbc\x2d20a8d9150e54-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 7 01:31:25.883832 kubelet[2740]: I0307 01:31:25.883769 2740 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0367950-f8de-4cea-8cbc-20a8d9150e54-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c0367950-f8de-4cea-8cbc-20a8d9150e54" (UID: "c0367950-f8de-4cea-8cbc-20a8d9150e54"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 7 01:31:25.885640 kubelet[2740]: I0307 01:31:25.884130 2740 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0367950-f8de-4cea-8cbc-20a8d9150e54-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c0367950-f8de-4cea-8cbc-20a8d9150e54" (UID: "c0367950-f8de-4cea-8cbc-20a8d9150e54"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 7 01:31:25.895628 kubelet[2740]: I0307 01:31:25.894717 2740 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0367950-f8de-4cea-8cbc-20a8d9150e54-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c0367950-f8de-4cea-8cbc-20a8d9150e54" (UID: "c0367950-f8de-4cea-8cbc-20a8d9150e54"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 01:31:25.899475 kubelet[2740]: I0307 01:31:25.896593 2740 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0367950-f8de-4cea-8cbc-20a8d9150e54-kube-api-access-nsvjs" (OuterVolumeSpecName: "kube-api-access-nsvjs") pod "c0367950-f8de-4cea-8cbc-20a8d9150e54" (UID: "c0367950-f8de-4cea-8cbc-20a8d9150e54"). InnerVolumeSpecName "kube-api-access-nsvjs". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 7 01:31:25.948913 kubelet[2740]: I0307 01:31:25.948856 2740 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c0367950-f8de-4cea-8cbc-20a8d9150e54-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 7 01:31:25.950840 kubelet[2740]: I0307 01:31:25.950605 2740 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c0367950-f8de-4cea-8cbc-20a8d9150e54-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 7 01:31:25.950840 kubelet[2740]: I0307 01:31:25.950634 2740 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c0367950-f8de-4cea-8cbc-20a8d9150e54-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 7 01:31:25.950840 kubelet[2740]: I0307 01:31:25.950648 2740 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c0367950-f8de-4cea-8cbc-20a8d9150e54-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 7 01:31:25.950840 kubelet[2740]: I0307 01:31:25.950663 2740 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c0367950-f8de-4cea-8cbc-20a8d9150e54-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 7 01:31:25.950840 kubelet[2740]: I0307 01:31:25.950682 2740 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c0367950-f8de-4cea-8cbc-20a8d9150e54-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 7 01:31:25.950840 kubelet[2740]: I0307 01:31:25.950703 2740 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c0367950-f8de-4cea-8cbc-20a8d9150e54-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 7 01:31:25.950840 kubelet[2740]: I0307 01:31:25.950719 2740 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c0367950-f8de-4cea-8cbc-20a8d9150e54-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 7 01:31:25.950840 kubelet[2740]: I0307 01:31:25.950734 2740 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c0367950-f8de-4cea-8cbc-20a8d9150e54-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Mar 7 01:31:25.951802 kubelet[2740]: I0307 01:31:25.950751 2740 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c0367950-f8de-4cea-8cbc-20a8d9150e54-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 7 01:31:25.951802 kubelet[2740]: I0307 01:31:25.950764 2740 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nsvjs\" (UniqueName: \"kubernetes.io/projected/c0367950-f8de-4cea-8cbc-20a8d9150e54-kube-api-access-nsvjs\") on node \"localhost\" DevicePath \"\"" Mar 7 01:31:25.951802 kubelet[2740]: I0307 01:31:25.950779 2740 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c0367950-f8de-4cea-8cbc-20a8d9150e54-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 7 01:31:25.951802 kubelet[2740]: I0307 01:31:25.950795 2740 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c0367950-f8de-4cea-8cbc-20a8d9150e54-etc-cni-netd\") on node \"localhost\" DevicePath 
\"\"" Mar 7 01:31:25.951802 kubelet[2740]: I0307 01:31:25.950809 2740 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c0367950-f8de-4cea-8cbc-20a8d9150e54-hostproc\") on node \"localhost\" DevicePath \"\"" Mar 7 01:31:26.363598 systemd[1]: var-lib-kubelet-pods-c0367950\x2df8de\x2d4cea\x2d8cbc\x2d20a8d9150e54-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnsvjs.mount: Deactivated successfully. Mar 7 01:31:26.363807 systemd[1]: var-lib-kubelet-pods-c0367950\x2df8de\x2d4cea\x2d8cbc\x2d20a8d9150e54-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 7 01:31:26.603144 kubelet[2740]: I0307 01:31:26.602512 2740 scope.go:117] "RemoveContainer" containerID="0892415e55c9535cd28023154245b52a5a8105ae62a69f07962e021a3296ae56" Mar 7 01:31:26.617340 containerd[1478]: time="2026-03-07T01:31:26.616129852Z" level=info msg="RemoveContainer for \"0892415e55c9535cd28023154245b52a5a8105ae62a69f07962e021a3296ae56\"" Mar 7 01:31:26.666444 containerd[1478]: time="2026-03-07T01:31:26.655903038Z" level=info msg="RemoveContainer for \"0892415e55c9535cd28023154245b52a5a8105ae62a69f07962e021a3296ae56\" returns successfully" Mar 7 01:31:26.667361 kubelet[2740]: I0307 01:31:26.667252 2740 scope.go:117] "RemoveContainer" containerID="1c09cb310c03fd8dd76ede799b70aa79d311ae32a9b1e44cf56b67c5ad924d20" Mar 7 01:31:26.682591 containerd[1478]: time="2026-03-07T01:31:26.675380754Z" level=info msg="RemoveContainer for \"1c09cb310c03fd8dd76ede799b70aa79d311ae32a9b1e44cf56b67c5ad924d20\"" Mar 7 01:31:26.685705 systemd[1]: Removed slice kubepods-burstable-podc0367950_f8de_4cea_8cbc_20a8d9150e54.slice - libcontainer container kubepods-burstable-podc0367950_f8de_4cea_8cbc_20a8d9150e54.slice. Mar 7 01:31:26.699367 systemd[1]: kubepods-burstable-podc0367950_f8de_4cea_8cbc_20a8d9150e54.slice: Consumed 40.629s CPU time. 
Mar 7 01:31:26.723695 containerd[1478]: time="2026-03-07T01:31:26.722509528Z" level=info msg="RemoveContainer for \"1c09cb310c03fd8dd76ede799b70aa79d311ae32a9b1e44cf56b67c5ad924d20\" returns successfully" Mar 7 01:31:26.731333 kubelet[2740]: I0307 01:31:26.728106 2740 scope.go:117] "RemoveContainer" containerID="3de17d6a79134d23587392fd00735b50c8f59a5dad3dcd587eba759d2c762d41" Mar 7 01:31:26.758115 containerd[1478]: time="2026-03-07T01:31:26.753184117Z" level=info msg="RemoveContainer for \"3de17d6a79134d23587392fd00735b50c8f59a5dad3dcd587eba759d2c762d41\"" Mar 7 01:31:26.829769 containerd[1478]: time="2026-03-07T01:31:26.823356156Z" level=info msg="RemoveContainer for \"3de17d6a79134d23587392fd00735b50c8f59a5dad3dcd587eba759d2c762d41\" returns successfully" Mar 7 01:31:26.835579 kubelet[2740]: I0307 01:31:26.829539 2740 scope.go:117] "RemoveContainer" containerID="22a3a73a0f3899ea76814b74467917573db77267cf7d1641c075af5e47db5882" Mar 7 01:31:26.839599 containerd[1478]: time="2026-03-07T01:31:26.835874423Z" level=info msg="RemoveContainer for \"22a3a73a0f3899ea76814b74467917573db77267cf7d1641c075af5e47db5882\"" Mar 7 01:31:26.859177 containerd[1478]: time="2026-03-07T01:31:26.858853738Z" level=info msg="RemoveContainer for \"22a3a73a0f3899ea76814b74467917573db77267cf7d1641c075af5e47db5882\" returns successfully" Mar 7 01:31:26.863186 kubelet[2740]: I0307 01:31:26.861486 2740 scope.go:117] "RemoveContainer" containerID="ddc565909c8e69060b0f12175897c31a6af9316c69cbfd35ef3e4afd03fbb86f" Mar 7 01:31:26.887470 containerd[1478]: time="2026-03-07T01:31:26.887337921Z" level=info msg="RemoveContainer for \"ddc565909c8e69060b0f12175897c31a6af9316c69cbfd35ef3e4afd03fbb86f\"" Mar 7 01:31:26.919545 containerd[1478]: time="2026-03-07T01:31:26.917572719Z" level=info msg="RemoveContainer for \"ddc565909c8e69060b0f12175897c31a6af9316c69cbfd35ef3e4afd03fbb86f\" returns successfully" Mar 7 01:31:26.924906 kubelet[2740]: I0307 01:31:26.917902 2740 scope.go:117] "RemoveContainer" containerID="314228e09dd3f00246522a53ab0401ab966719c2ced2d578f3c946d428390b96" Mar 7 01:31:26.936191 containerd[1478]: time="2026-03-07T01:31:26.928808215Z" level=info msg="RemoveContainer for \"314228e09dd3f00246522a53ab0401ab966719c2ced2d578f3c946d428390b96\"" Mar 7 01:31:26.999174 containerd[1478]: time="2026-03-07T01:31:26.992556333Z" level=info msg="RemoveContainer for \"314228e09dd3f00246522a53ab0401ab966719c2ced2d578f3c946d428390b96\" returns successfully" Mar 7 01:31:27.792659 kubelet[2740]: I0307 01:31:27.792196 2740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b3fe19e-aafe-42f5-be8f-025558c799ca" path="/var/lib/kubelet/pods/8b3fe19e-aafe-42f5-be8f-025558c799ca/volumes" Mar 7 01:31:27.793507 kubelet[2740]: I0307 01:31:27.793358 2740 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0367950-f8de-4cea-8cbc-20a8d9150e54" path="/var/lib/kubelet/pods/c0367950-f8de-4cea-8cbc-20a8d9150e54/volumes" Mar 7 01:31:28.318295 kubelet[2740]: I0307 01:31:28.317089 2740 setters.go:543] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-07T01:31:28Z","lastTransitionTime":"2026-03-07T01:31:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 7 01:31:28.985665 sshd[5810]: pam_unix(sshd:session): session closed for user core Mar 7 01:31:28.994805 systemd[1]: 
sshd@83-10.0.0.31:22-10.0.0.1:40726.service: Deactivated successfully. Mar 7 01:31:28.999589 systemd[1]: session-84.scope: Deactivated successfully. Mar 7 01:31:29.000593 systemd[1]: session-84.scope: Consumed 1.456s CPU time. Mar 7 01:31:29.014736 systemd-logind[1462]: Session 84 logged out. Waiting for processes to exit. Mar 7 01:31:29.023757 systemd-logind[1462]: Removed session 84. Mar 7 01:31:29.040780 systemd[1]: Started sshd@84-10.0.0.31:22-10.0.0.1:40730.service - OpenSSH per-connection server daemon (10.0.0.1:40730). Mar 7 01:31:29.229220 sshd[5846]: Accepted publickey for core from 10.0.0.1 port 40730 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:31:29.241431 sshd[5846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:31:29.269130 systemd-logind[1462]: New session 85 of user core. Mar 7 01:31:29.294412 kubelet[2740]: I0307 01:31:29.292377 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a454d5e4-db42-485f-b0b4-496cc99162cb-xtables-lock\") pod \"cilium-9fh28\" (UID: \"a454d5e4-db42-485f-b0b4-496cc99162cb\") " pod="kube-system/cilium-9fh28" Mar 7 01:31:29.299316 kubelet[2740]: I0307 01:31:29.295445 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a454d5e4-db42-485f-b0b4-496cc99162cb-cilium-config-path\") pod \"cilium-9fh28\" (UID: \"a454d5e4-db42-485f-b0b4-496cc99162cb\") " pod="kube-system/cilium-9fh28" Mar 7 01:31:29.299316 kubelet[2740]: I0307 01:31:29.295554 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a454d5e4-db42-485f-b0b4-496cc99162cb-cilium-run\") pod \"cilium-9fh28\" (UID: \"a454d5e4-db42-485f-b0b4-496cc99162cb\") " pod="kube-system/cilium-9fh28" Mar 7 01:31:29.299316 kubelet[2740]: I0307 01:31:29.295583 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a454d5e4-db42-485f-b0b4-496cc99162cb-etc-cni-netd\") pod \"cilium-9fh28\" (UID: \"a454d5e4-db42-485f-b0b4-496cc99162cb\") " pod="kube-system/cilium-9fh28" Mar 7 01:31:29.299316 kubelet[2740]: I0307 01:31:29.295603 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a454d5e4-db42-485f-b0b4-496cc99162cb-host-proc-sys-net\") pod \"cilium-9fh28\" (UID: \"a454d5e4-db42-485f-b0b4-496cc99162cb\") " pod="kube-system/cilium-9fh28" Mar 7 01:31:29.299316 kubelet[2740]: I0307 01:31:29.295631 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a454d5e4-db42-485f-b0b4-496cc99162cb-hostproc\") pod \"cilium-9fh28\" (UID: \"a454d5e4-db42-485f-b0b4-496cc99162cb\") " pod="kube-system/cilium-9fh28" Mar 7 01:31:29.299316 kubelet[2740]: I0307 01:31:29.295870 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfpf6\" (UniqueName: \"kubernetes.io/projected/a454d5e4-db42-485f-b0b4-496cc99162cb-kube-api-access-cfpf6\") pod \"cilium-9fh28\" (UID: \"a454d5e4-db42-485f-b0b4-496cc99162cb\") " pod="kube-system/cilium-9fh28" Mar 7 01:31:29.299635 kubelet[2740]: I0307 01:31:29.295907 2740 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a454d5e4-db42-485f-b0b4-496cc99162cb-clustermesh-secrets\") pod \"cilium-9fh28\" (UID: \"a454d5e4-db42-485f-b0b4-496cc99162cb\") " pod="kube-system/cilium-9fh28" Mar 7 01:31:29.299635 kubelet[2740]: I0307 01:31:29.296042 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a454d5e4-db42-485f-b0b4-496cc99162cb-cilium-ipsec-secrets\") pod \"cilium-9fh28\" (UID: \"a454d5e4-db42-485f-b0b4-496cc99162cb\") " pod="kube-system/cilium-9fh28" Mar 7 01:31:29.299635 kubelet[2740]: I0307 01:31:29.296073 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a454d5e4-db42-485f-b0b4-496cc99162cb-host-proc-sys-kernel\") pod \"cilium-9fh28\" (UID: \"a454d5e4-db42-485f-b0b4-496cc99162cb\") " pod="kube-system/cilium-9fh28" Mar 7 01:31:29.299635 kubelet[2740]: I0307 01:31:29.296095 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a454d5e4-db42-485f-b0b4-496cc99162cb-cilium-cgroup\") pod \"cilium-9fh28\" (UID: \"a454d5e4-db42-485f-b0b4-496cc99162cb\") " pod="kube-system/cilium-9fh28" Mar 7 01:31:29.299635 kubelet[2740]: I0307 01:31:29.296114 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a454d5e4-db42-485f-b0b4-496cc99162cb-cni-path\") pod \"cilium-9fh28\" (UID: \"a454d5e4-db42-485f-b0b4-496cc99162cb\") " pod="kube-system/cilium-9fh28" Mar 7 01:31:29.299635 kubelet[2740]: I0307 01:31:29.296133 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a454d5e4-db42-485f-b0b4-496cc99162cb-lib-modules\") pod \"cilium-9fh28\" (UID: \"a454d5e4-db42-485f-b0b4-496cc99162cb\") " pod="kube-system/cilium-9fh28" Mar 7 01:31:29.299828 kubelet[2740]: I0307 01:31:29.296158 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a454d5e4-db42-485f-b0b4-496cc99162cb-hubble-tls\") pod \"cilium-9fh28\" (UID: \"a454d5e4-db42-485f-b0b4-496cc99162cb\") " pod="kube-system/cilium-9fh28" Mar 7 01:31:29.299828 kubelet[2740]: I0307 01:31:29.296176 2740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a454d5e4-db42-485f-b0b4-496cc99162cb-bpf-maps\") pod \"cilium-9fh28\" (UID: \"a454d5e4-db42-485f-b0b4-496cc99162cb\") " pod="kube-system/cilium-9fh28" Mar 7 01:31:29.315254 systemd[1]: Started session-85.scope - Session 85 of User core. Mar 7 01:31:29.346629 systemd[1]: Created slice kubepods-burstable-poda454d5e4_db42_485f_b0b4_496cc99162cb.slice - libcontainer container kubepods-burstable-poda454d5e4_db42_485f_b0b4_496cc99162cb.slice. Mar 7 01:31:29.482425 sshd[5846]: pam_unix(sshd:session): session closed for user core Mar 7 01:31:29.534817 systemd[1]: sshd@84-10.0.0.31:22-10.0.0.1:40730.service: Deactivated successfully. Mar 7 01:31:29.540256 systemd[1]: session-85.scope: Deactivated successfully. Mar 7 01:31:29.542832 systemd-logind[1462]: Session 85 logged out. 
Waiting for processes to exit. Mar 7 01:31:29.553837 systemd[1]: Started sshd@85-10.0.0.31:22-10.0.0.1:40738.service - OpenSSH per-connection server daemon (10.0.0.1:40738). Mar 7 01:31:29.557801 systemd-logind[1462]: Removed session 85. Mar 7 01:31:29.647787 sshd[5858]: Accepted publickey for core from 10.0.0.1 port 40738 ssh2: RSA SHA256:CIVKEAA2usQRtTCYQu8FBM8BRm7mTHcz5eFpGV4bQ2E Mar 7 01:31:29.656393 sshd[5858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 01:31:29.689212 kubelet[2740]: E0307 01:31:29.684678 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:31:29.689796 containerd[1478]: time="2026-03-07T01:31:29.685860982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9fh28,Uid:a454d5e4-db42-485f-b0b4-496cc99162cb,Namespace:kube-system,Attempt:0,}" Mar 7 01:31:29.736171 systemd-logind[1462]: New session 86 of user core. Mar 7 01:31:29.798772 systemd[1]: Started session-86.scope - Session 86 of User core. Mar 7 01:31:29.909767 containerd[1478]: time="2026-03-07T01:31:29.908332827Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 01:31:29.909767 containerd[1478]: time="2026-03-07T01:31:29.908435337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 01:31:29.909767 containerd[1478]: time="2026-03-07T01:31:29.908450625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:31:29.917356 containerd[1478]: time="2026-03-07T01:31:29.912854107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 01:31:30.029529 systemd[1]: Started cri-containerd-33b77821dc24aea50a1a0c4b8accf7ad6dc1d19dddee7206534b0781ea2cb689.scope - libcontainer container 33b77821dc24aea50a1a0c4b8accf7ad6dc1d19dddee7206534b0781ea2cb689. 
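Each "Accepted publickey for core" entry above identifies the key by OpenSSH's SHA256 fingerprint, which is the base64 encoding (padding stripped) of the SHA-256 digest of the raw public-key blob. A sketch that recomputes it from an authorized_keys-style line, so the logged CIVKEAA... value can be matched to a key on disk (path and function name are illustrative):

    import base64, hashlib

    def openssh_fingerprint(pubkey_line: str) -> str:
        # Field 2 of "ssh-rsa AAAA... comment" is the base64 key blob;
        # the fingerprint is SHA256(blob), base64 with padding stripped.
        blob = base64.b64decode(pubkey_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    with open("/home/core/.ssh/authorized_keys") as f:  # illustrative path
        for line in f:
            if line.strip() and not line.startswith("#"):
                print(openssh_fingerprint(line))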
Mar 7 01:31:30.252597 containerd[1478]: time="2026-03-07T01:31:30.252389271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9fh28,Uid:a454d5e4-db42-485f-b0b4-496cc99162cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"33b77821dc24aea50a1a0c4b8accf7ad6dc1d19dddee7206534b0781ea2cb689\"" Mar 7 01:31:30.258583 kubelet[2740]: E0307 01:31:30.258237 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:31:30.303330 containerd[1478]: time="2026-03-07T01:31:30.303048698Z" level=info msg="CreateContainer within sandbox \"33b77821dc24aea50a1a0c4b8accf7ad6dc1d19dddee7206534b0781ea2cb689\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 7 01:31:30.398800 containerd[1478]: time="2026-03-07T01:31:30.397615946Z" level=info msg="CreateContainer within sandbox \"33b77821dc24aea50a1a0c4b8accf7ad6dc1d19dddee7206534b0781ea2cb689\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"651e1a81119d1f0ab3c174bb98f263c6f7c55b59c03d9ef43e960d019a00631f\"" Mar 7 01:31:30.401475 containerd[1478]: time="2026-03-07T01:31:30.400513208Z" level=info msg="StartContainer for \"651e1a81119d1f0ab3c174bb98f263c6f7c55b59c03d9ef43e960d019a00631f\"" Mar 7 01:31:30.573297 kubelet[2740]: E0307 01:31:30.572617 2740 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 7 01:31:30.575673 systemd[1]: run-containerd-runc-k8s.io-651e1a81119d1f0ab3c174bb98f263c6f7c55b59c03d9ef43e960d019a00631f-runc.5qi6nl.mount: Deactivated successfully. Mar 7 01:31:30.615681 systemd[1]: Started cri-containerd-651e1a81119d1f0ab3c174bb98f263c6f7c55b59c03d9ef43e960d019a00631f.scope - libcontainer container 651e1a81119d1f0ab3c174bb98f263c6f7c55b59c03d9ef43e960d019a00631f. Mar 7 01:31:30.756517 containerd[1478]: time="2026-03-07T01:31:30.755542237Z" level=info msg="StartContainer for \"651e1a81119d1f0ab3c174bb98f263c6f7c55b59c03d9ef43e960d019a00631f\" returns successfully" Mar 7 01:31:30.824560 systemd[1]: cri-containerd-651e1a81119d1f0ab3c174bb98f263c6f7c55b59c03d9ef43e960d019a00631f.scope: Deactivated successfully. Mar 7 01:31:31.015791 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-651e1a81119d1f0ab3c174bb98f263c6f7c55b59c03d9ef43e960d019a00631f-rootfs.mount: Deactivated successfully. 
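The mount-cgroup step above runs to completion almost immediately: its scope is started at 01:31:30.615681 and deactivated at 01:31:30.824560. These journal timestamps omit the year, so one has to be pinned for arithmetic; a small helper (a sketch, not a general journal parser):

    from datetime import datetime

    def ts(stamp: str, year: int = 2026) -> datetime:
        # e.g. "Mar 7 01:31:30.824560", as printed in this journal
        return datetime.strptime(f"{year} {stamp}", "%Y %b %d %H:%M:%S.%f")

    started     = ts("Mar 7 01:31:30.615681")  # scope started
    deactivated = ts("Mar 7 01:31:30.824560")  # scope deactivated
    print((deactivated - started).total_seconds())  # ~0.209 s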
Mar 7 01:31:31.104661 containerd[1478]: time="2026-03-07T01:31:31.104274527Z" level=info msg="shim disconnected" id=651e1a81119d1f0ab3c174bb98f263c6f7c55b59c03d9ef43e960d019a00631f namespace=k8s.io Mar 7 01:31:31.104661 containerd[1478]: time="2026-03-07T01:31:31.104411663Z" level=warning msg="cleaning up after shim disconnected" id=651e1a81119d1f0ab3c174bb98f263c6f7c55b59c03d9ef43e960d019a00631f namespace=k8s.io Mar 7 01:31:31.104661 containerd[1478]: time="2026-03-07T01:31:31.104434807Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:31:31.728065 kubelet[2740]: E0307 01:31:31.726462 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:31:31.781336 containerd[1478]: time="2026-03-07T01:31:31.778681967Z" level=info msg="CreateContainer within sandbox \"33b77821dc24aea50a1a0c4b8accf7ad6dc1d19dddee7206534b0781ea2cb689\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 7 01:31:31.931297 containerd[1478]: time="2026-03-07T01:31:31.931245927Z" level=info msg="CreateContainer within sandbox \"33b77821dc24aea50a1a0c4b8accf7ad6dc1d19dddee7206534b0781ea2cb689\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cd7c22d0b9f1643ec4a1c00c1340afbec2c82b895c40ddb79823c73461fecc46\"" Mar 7 01:31:31.935335 containerd[1478]: time="2026-03-07T01:31:31.934169879Z" level=info msg="StartContainer for \"cd7c22d0b9f1643ec4a1c00c1340afbec2c82b895c40ddb79823c73461fecc46\"" Mar 7 01:31:32.031316 systemd[1]: Started cri-containerd-cd7c22d0b9f1643ec4a1c00c1340afbec2c82b895c40ddb79823c73461fecc46.scope - libcontainer container cd7c22d0b9f1643ec4a1c00c1340afbec2c82b895c40ddb79823c73461fecc46. Mar 7 01:31:32.275788 containerd[1478]: time="2026-03-07T01:31:32.268883422Z" level=info msg="StartContainer for \"cd7c22d0b9f1643ec4a1c00c1340afbec2c82b895c40ddb79823c73461fecc46\" returns successfully" Mar 7 01:31:32.355636 systemd[1]: cri-containerd-cd7c22d0b9f1643ec4a1c00c1340afbec2c82b895c40ddb79823c73461fecc46.scope: Deactivated successfully. Mar 7 01:31:32.589796 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd7c22d0b9f1643ec4a1c00c1340afbec2c82b895c40ddb79823c73461fecc46-rootfs.mount: Deactivated successfully. 
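The recurring dns.go:154 "Nameserver limits exceeded" errors reflect the kubelet clipping resolv.conf to the three nameserver entries a glibc resolver will actually consult, then logging the applied line (here 1.1.1.1 1.0.0.1 8.8.8.8). A sketch of the same check (constant and function names are mine):

    MAX_NAMESERVERS = 3  # glibc resolver limit (MAXNS)

    def split_nameservers(path: str = "/etc/resolv.conf"):
        servers = []
        with open(path) as f:
            for line in f:
                fields = line.split()
                if fields and fields[0] == "nameserver":
                    servers.append(fields[1])
        applied, omitted = servers[:MAX_NAMESERVERS], servers[MAX_NAMESERVERS:]
        if omitted:
            print("Nameserver limits exceeded; applied:", " ".join(applied))
        return applied, omitted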
Mar 7 01:31:32.651180 containerd[1478]: time="2026-03-07T01:31:32.649630295Z" level=info msg="shim disconnected" id=cd7c22d0b9f1643ec4a1c00c1340afbec2c82b895c40ddb79823c73461fecc46 namespace=k8s.io Mar 7 01:31:32.651180 containerd[1478]: time="2026-03-07T01:31:32.649714262Z" level=warning msg="cleaning up after shim disconnected" id=cd7c22d0b9f1643ec4a1c00c1340afbec2c82b895c40ddb79823c73461fecc46 namespace=k8s.io Mar 7 01:31:32.651180 containerd[1478]: time="2026-03-07T01:31:32.649726054Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:31:32.843683 kubelet[2740]: E0307 01:31:32.841162 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:31:32.899438 containerd[1478]: time="2026-03-07T01:31:32.898578184Z" level=info msg="CreateContainer within sandbox \"33b77821dc24aea50a1a0c4b8accf7ad6dc1d19dddee7206534b0781ea2cb689\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 7 01:31:33.099045 containerd[1478]: time="2026-03-07T01:31:33.098737455Z" level=info msg="CreateContainer within sandbox \"33b77821dc24aea50a1a0c4b8accf7ad6dc1d19dddee7206534b0781ea2cb689\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5a4da52cb44b613a3f6016caf974f65a76da0cfa0b2f45cc20fc27ed762791b5\"" Mar 7 01:31:33.106159 containerd[1478]: time="2026-03-07T01:31:33.100506775Z" level=info msg="StartContainer for \"5a4da52cb44b613a3f6016caf974f65a76da0cfa0b2f45cc20fc27ed762791b5\"" Mar 7 01:31:33.268865 systemd[1]: Started cri-containerd-5a4da52cb44b613a3f6016caf974f65a76da0cfa0b2f45cc20fc27ed762791b5.scope - libcontainer container 5a4da52cb44b613a3f6016caf974f65a76da0cfa0b2f45cc20fc27ed762791b5. Mar 7 01:31:33.685569 containerd[1478]: time="2026-03-07T01:31:33.685501666Z" level=info msg="StartContainer for \"5a4da52cb44b613a3f6016caf974f65a76da0cfa0b2f45cc20fc27ed762791b5\" returns successfully" Mar 7 01:31:33.729848 systemd[1]: cri-containerd-5a4da52cb44b613a3f6016caf974f65a76da0cfa0b2f45cc20fc27ed762791b5.scope: Deactivated successfully. Mar 7 01:31:33.881629 kubelet[2740]: E0307 01:31:33.879914 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:31:34.014914 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a4da52cb44b613a3f6016caf974f65a76da0cfa0b2f45cc20fc27ed762791b5-rootfs.mount: Deactivated successfully. 
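Each CreateContainer/StartContainer/Deactivated cycle above is one step of the Cilium pod's init chain inside sandbox 33b77821...: mount-cgroup, then apply-sysctl-overwrites, then mount-bpf-fs so far. The order can be recovered mechanically from the containerd entries; a throwaway parser, with the regex tuned to the exact quoting in this capture:

    import re

    CREATE = re.compile(
        r"Name:([A-Za-z0-9-]+),Attempt:\d+,\} returns container id"
        r"[^0-9a-f]*([0-9a-f]{64})")

    def init_chain(journal_text: str):
        # -> [("mount-cgroup", "651e1a81..."),
        #     ("apply-sysctl-overwrites", "cd7c22d0..."), ...]
        return CREATE.findall(journal_text)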
Mar 7 01:31:34.082679 containerd[1478]: time="2026-03-07T01:31:34.081744108Z" level=info msg="shim disconnected" id=5a4da52cb44b613a3f6016caf974f65a76da0cfa0b2f45cc20fc27ed762791b5 namespace=k8s.io Mar 7 01:31:34.082679 containerd[1478]: time="2026-03-07T01:31:34.081869210Z" level=warning msg="cleaning up after shim disconnected" id=5a4da52cb44b613a3f6016caf974f65a76da0cfa0b2f45cc20fc27ed762791b5 namespace=k8s.io Mar 7 01:31:34.082679 containerd[1478]: time="2026-03-07T01:31:34.081888977Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:31:34.908770 kubelet[2740]: E0307 01:31:34.908726 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:31:34.933182 containerd[1478]: time="2026-03-07T01:31:34.932871050Z" level=info msg="CreateContainer within sandbox \"33b77821dc24aea50a1a0c4b8accf7ad6dc1d19dddee7206534b0781ea2cb689\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 7 01:31:35.040078 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3711839919.mount: Deactivated successfully. Mar 7 01:31:35.092782 containerd[1478]: time="2026-03-07T01:31:35.089506078Z" level=info msg="CreateContainer within sandbox \"33b77821dc24aea50a1a0c4b8accf7ad6dc1d19dddee7206534b0781ea2cb689\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e450d53337135bb0378612c6f4454b3ea073b5ea1b94deb954bdaf027185a9fa\"" Mar 7 01:31:35.092782 containerd[1478]: time="2026-03-07T01:31:35.091039848Z" level=info msg="StartContainer for \"e450d53337135bb0378612c6f4454b3ea073b5ea1b94deb954bdaf027185a9fa\"" Mar 7 01:31:35.252430 systemd[1]: Started cri-containerd-e450d53337135bb0378612c6f4454b3ea073b5ea1b94deb954bdaf027185a9fa.scope - libcontainer container e450d53337135bb0378612c6f4454b3ea073b5ea1b94deb954bdaf027185a9fa. Mar 7 01:31:35.435198 systemd[1]: cri-containerd-e450d53337135bb0378612c6f4454b3ea073b5ea1b94deb954bdaf027185a9fa.scope: Deactivated successfully. 
Mar 7 01:31:35.449245 containerd[1478]: time="2026-03-07T01:31:35.448218491Z" level=info msg="StartContainer for \"e450d53337135bb0378612c6f4454b3ea073b5ea1b94deb954bdaf027185a9fa\" returns successfully" Mar 7 01:31:35.588119 kubelet[2740]: E0307 01:31:35.578583 2740 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 7 01:31:35.708071 containerd[1478]: time="2026-03-07T01:31:35.704385031Z" level=info msg="shim disconnected" id=e450d53337135bb0378612c6f4454b3ea073b5ea1b94deb954bdaf027185a9fa namespace=k8s.io Mar 7 01:31:35.708071 containerd[1478]: time="2026-03-07T01:31:35.704461012Z" level=warning msg="cleaning up after shim disconnected" id=e450d53337135bb0378612c6f4454b3ea073b5ea1b94deb954bdaf027185a9fa namespace=k8s.io Mar 7 01:31:35.708071 containerd[1478]: time="2026-03-07T01:31:35.704477012Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 01:31:35.931239 kubelet[2740]: E0307 01:31:35.929507 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:31:35.971840 containerd[1478]: time="2026-03-07T01:31:35.969749675Z" level=info msg="CreateContainer within sandbox \"33b77821dc24aea50a1a0c4b8accf7ad6dc1d19dddee7206534b0781ea2cb689\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 7 01:31:36.001833 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e450d53337135bb0378612c6f4454b3ea073b5ea1b94deb954bdaf027185a9fa-rootfs.mount: Deactivated successfully. Mar 7 01:31:36.157395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3164372406.mount: Deactivated successfully. Mar 7 01:31:36.173665 containerd[1478]: time="2026-03-07T01:31:36.173526142Z" level=info msg="CreateContainer within sandbox \"33b77821dc24aea50a1a0c4b8accf7ad6dc1d19dddee7206534b0781ea2cb689\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8bbd4973b0da8ced78ae3c2ea6bb377d07d223922598a78f219d2c90e0b479da\"" Mar 7 01:31:36.180900 containerd[1478]: time="2026-03-07T01:31:36.179747492Z" level=info msg="StartContainer for \"8bbd4973b0da8ced78ae3c2ea6bb377d07d223922598a78f219d2c90e0b479da\"" Mar 7 01:31:36.388513 systemd[1]: Started cri-containerd-8bbd4973b0da8ced78ae3c2ea6bb377d07d223922598a78f219d2c90e0b479da.scope - libcontainer container 8bbd4973b0da8ced78ae3c2ea6bb377d07d223922598a78f219d2c90e0b479da. 
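Unlike the four short-lived init steps, the cilium-agent container started here never gets a matching scope deactivation in this window, and neither does the sandbox's pause container 33b77821.... The scope lines alone are enough to spot what is still running (a sketch):

    import re

    def still_running(journal_text: str) -> set:
        started = set(re.findall(
            r"Started cri-containerd-([0-9a-f]{64})\.scope", journal_text))
        stopped = set(re.findall(
            r"cri-containerd-([0-9a-f]{64})\.scope: Deactivated", journal_text))
        return started - stopped  # here: the sandbox and the agent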
Mar 7 01:31:36.621155 containerd[1478]: time="2026-03-07T01:31:36.610791608Z" level=info msg="StartContainer for \"8bbd4973b0da8ced78ae3c2ea6bb377d07d223922598a78f219d2c90e0b479da\" returns successfully" Mar 7 01:31:37.973378 kubelet[2740]: E0307 01:31:37.973200 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:31:38.131218 kubelet[2740]: I0307 01:31:38.128786 2740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9fh28" podStartSLOduration=9.128681788 podStartE2EDuration="9.128681788s" podCreationTimestamp="2026-03-07 01:31:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 01:31:38.111260006 +0000 UTC m=+883.436022702" watchObservedRunningTime="2026-03-07 01:31:38.128681788 +0000 UTC m=+883.453444455" Mar 7 01:31:39.580098 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Mar 7 01:31:39.677372 kubelet[2740]: E0307 01:31:39.677254 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:31:39.798072 kubelet[2740]: E0307 01:31:39.794455 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:32:03.412459 kubelet[2740]: E0307 01:32:03.360210 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:32:03.697271 kubelet[2740]: E0307 01:32:03.697077 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:32:03.706813 containerd[1478]: time="2026-03-07T01:32:03.706756238Z" level=info msg="StopPodSandbox for \"8ac02ce029f5409ebd98fc29ab3e37d65aba8544d3d37b37d0513ab433677ab6\"" Mar 7 01:32:03.707846 containerd[1478]: time="2026-03-07T01:32:03.707807697Z" level=info msg="TearDown network for sandbox \"8ac02ce029f5409ebd98fc29ab3e37d65aba8544d3d37b37d0513ab433677ab6\" successfully" Mar 7 01:32:03.711161 containerd[1478]: time="2026-03-07T01:32:03.711119849Z" level=info msg="StopPodSandbox for \"8ac02ce029f5409ebd98fc29ab3e37d65aba8544d3d37b37d0513ab433677ab6\" returns successfully" Mar 7 01:32:03.712897 containerd[1478]: time="2026-03-07T01:32:03.712854052Z" level=info msg="RemovePodSandbox for \"8ac02ce029f5409ebd98fc29ab3e37d65aba8544d3d37b37d0513ab433677ab6\"" Mar 7 01:32:03.713293 containerd[1478]: time="2026-03-07T01:32:03.713257633Z" level=info msg="Forcibly stopping sandbox \"8ac02ce029f5409ebd98fc29ab3e37d65aba8544d3d37b37d0513ab433677ab6\"" Mar 7 01:32:03.713498 containerd[1478]: time="2026-03-07T01:32:03.713471432Z" level=info msg="TearDown network for sandbox \"8ac02ce029f5409ebd98fc29ab3e37d65aba8544d3d37b37d0513ab433677ab6\" successfully" Mar 7 01:32:03.742314 containerd[1478]: time="2026-03-07T01:32:03.742248178Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8ac02ce029f5409ebd98fc29ab3e37d65aba8544d3d37b37d0513ab433677ab6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
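The pod_startup_latency_tracker entry above reports podStartSLOduration=9.128681788s for cilium-9fh28; with both pull timestamps zeroed (no image pull), that is exactly watchObservedRunningTime minus podCreationTimestamp. Reproducing the subtraction (datetime's %f keeps microseconds, so the nanosecond tail is truncated):

    from datetime import datetime

    fmt = "%Y-%m-%d %H:%M:%S.%f %z"
    created  = datetime.strptime("2026-03-07 01:31:29.000000 +0000", fmt)
    observed = datetime.strptime("2026-03-07 01:31:38.128681 +0000", fmt)
    print((observed - created).total_seconds())  # 9.128681, matching the SLO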
Mar 7 01:32:03.745161 containerd[1478]: time="2026-03-07T01:32:03.742655669Z" level=info msg="RemovePodSandbox \"8ac02ce029f5409ebd98fc29ab3e37d65aba8544d3d37b37d0513ab433677ab6\" returns successfully" Mar 7 01:32:03.813525 containerd[1478]: time="2026-03-07T01:32:03.813453687Z" level=info msg="StopPodSandbox for \"19242b35dba757eab2a47cdd404da35d99ec9aafb76a7e887b6d863a604da086\"" Mar 7 01:32:03.820734 containerd[1478]: time="2026-03-07T01:32:03.820382609Z" level=info msg="TearDown network for sandbox \"19242b35dba757eab2a47cdd404da35d99ec9aafb76a7e887b6d863a604da086\" successfully" Mar 7 01:32:03.820734 containerd[1478]: time="2026-03-07T01:32:03.820462809Z" level=info msg="StopPodSandbox for \"19242b35dba757eab2a47cdd404da35d99ec9aafb76a7e887b6d863a604da086\" returns successfully" Mar 7 01:32:03.842123 containerd[1478]: time="2026-03-07T01:32:03.837162609Z" level=info msg="RemovePodSandbox for \"19242b35dba757eab2a47cdd404da35d99ec9aafb76a7e887b6d863a604da086\"" Mar 7 01:32:03.842123 containerd[1478]: time="2026-03-07T01:32:03.837227210Z" level=info msg="Forcibly stopping sandbox \"19242b35dba757eab2a47cdd404da35d99ec9aafb76a7e887b6d863a604da086\"" Mar 7 01:32:03.842123 containerd[1478]: time="2026-03-07T01:32:03.837361080Z" level=info msg="TearDown network for sandbox \"19242b35dba757eab2a47cdd404da35d99ec9aafb76a7e887b6d863a604da086\" successfully" Mar 7 01:32:03.903772 containerd[1478]: time="2026-03-07T01:32:03.903383925Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"19242b35dba757eab2a47cdd404da35d99ec9aafb76a7e887b6d863a604da086\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 7 01:32:03.903772 containerd[1478]: time="2026-03-07T01:32:03.903503448Z" level=info msg="RemovePodSandbox \"19242b35dba757eab2a47cdd404da35d99ec9aafb76a7e887b6d863a604da086\" returns successfully" Mar 7 01:32:07.314209 systemd-networkd[1397]: lxc_health: Link UP Mar 7 01:32:07.350802 systemd-networkd[1397]: lxc_health: Gained carrier Mar 7 01:32:07.693620 kubelet[2740]: E0307 01:32:07.693316 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:32:07.772093 kubelet[2740]: E0307 01:32:07.771848 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:32:07.795626 kubelet[2740]: E0307 01:32:07.795571 2740 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 7 01:32:09.151679 systemd[1]: run-containerd-runc-k8s.io-8bbd4973b0da8ced78ae3c2ea6bb377d07d223922598a78f219d2c90e0b479da-runc.3bvthC.mount: Deactivated successfully. Mar 7 01:32:09.330508 systemd-networkd[1397]: lxc_health: Gained IPv6LL Mar 7 01:32:19.606744 sshd[5858]: pam_unix(sshd:session): session closed for user core Mar 7 01:32:19.633911 systemd[1]: sshd@85-10.0.0.31:22-10.0.0.1:40738.service: Deactivated successfully. Mar 7 01:32:19.654309 systemd[1]: session-86.scope: Deactivated successfully. Mar 7 01:32:19.657155 systemd[1]: session-86.scope: Consumed 2.681s CPU time. Mar 7 01:32:19.683184 systemd-logind[1462]: Session 86 logged out. Waiting for processes to exit. Mar 7 01:32:19.695330 systemd-logind[1462]: Removed session 86.
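One pattern worth noting in the sandbox cleanup at 01:32:03: the forcible RemovePodSandbox warns that the sandbox status can no longer be found, yet still returns successfully, since removal of an already-absent object has reached the desired end state. The same delete-if-absent convention in miniature (names illustrative):

    import os

    def remove_if_present(path: str) -> bool:
        # "not found" is not a failure: the object already being gone is
        # exactly the end state a forced removal is trying to reach.
        try:
            os.remove(path)
        except FileNotFoundError:
            pass
        return True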