Mar 10 01:31:19.933581 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 9 22:55:40 -00 2026
Mar 10 01:31:19.933621 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2de2345ba8612ade61882513e7d9ebf4aad52996b6d7f4c567d9970e886b17cc
Mar 10 01:31:19.933641 kernel: BIOS-provided physical RAM map:
Mar 10 01:31:19.933652 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 10 01:31:19.933662 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 10 01:31:19.933673 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 10 01:31:19.933684 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 10 01:31:19.933695 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 10 01:31:19.933705 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 10 01:31:19.933722 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 10 01:31:19.933733 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 10 01:31:19.933743 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 10 01:31:19.933783 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 10 01:31:19.933796 kernel: NX (Execute Disable) protection: active
Mar 10 01:31:19.933808 kernel: APIC: Static calls initialized
Mar 10 01:31:19.933871 kernel: SMBIOS 2.8 present.
Mar 10 01:31:19.933887 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 10 01:31:19.933899 kernel: Hypervisor detected: KVM
Mar 10 01:31:19.933910 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 10 01:31:19.933921 kernel: kvm-clock: using sched offset of 43509279610 cycles
Mar 10 01:31:19.933933 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 10 01:31:19.933943 kernel: tsc: Detected 2445.424 MHz processor
Mar 10 01:31:19.933956 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 10 01:31:19.933968 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 10 01:31:19.933987 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 10 01:31:19.933999 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 10 01:31:19.934011 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 10 01:31:19.934023 kernel: Using GB pages for direct mapping
Mar 10 01:31:19.934034 kernel: ACPI: Early table checksum verification disabled
Mar 10 01:31:19.934045 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 10 01:31:19.934057 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:31:19.934070 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:31:19.934082 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:31:19.934098 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 10 01:31:19.934110 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:31:19.934122 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:31:19.934134 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:31:19.934146 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:31:19.934156 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 10 01:31:19.934168 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 10 01:31:19.934188 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 10 01:31:19.934206 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 10 01:31:19.934218 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 10 01:31:19.934230 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 10 01:31:19.934243 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 10 01:31:19.934316 kernel: No NUMA configuration found
Mar 10 01:31:19.934331 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 10 01:31:19.934350 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Mar 10 01:31:19.934363 kernel: Zone ranges:
Mar 10 01:31:19.934375 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 10 01:31:19.934387 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 10 01:31:19.934398 kernel: Normal empty
Mar 10 01:31:19.934410 kernel: Movable zone start for each node
Mar 10 01:31:19.934423 kernel: Early memory node ranges
Mar 10 01:31:19.934434 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 10 01:31:19.934477 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 10 01:31:19.934490 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 10 01:31:19.934510 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 10 01:31:19.934549 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 10 01:31:19.934564 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 10 01:31:19.934576 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 10 01:31:19.934588 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 10 01:31:19.934599 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 10 01:31:19.934611 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 10 01:31:19.934624 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 10 01:31:19.934636 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 10 01:31:19.934653 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 10 01:31:19.934666 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 10 01:31:19.934679 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 10 01:31:19.934690 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 10 01:31:19.934702 kernel: TSC deadline timer available
Mar 10 01:31:19.934714 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 10 01:31:19.934725 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 10 01:31:19.934738 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 10 01:31:19.934775 kernel: kvm-guest: setup PV sched yield
Mar 10 01:31:19.934797 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 10 01:31:19.934809 kernel: Booting paravirtualized kernel on KVM
Mar 10 01:31:19.934821 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 10 01:31:19.934833 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 10 01:31:19.934845 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Mar 10 01:31:19.934858 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Mar 10 01:31:19.934870 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 10 01:31:19.934881 kernel: kvm-guest: PV spinlocks enabled
Mar 10 01:31:19.934893 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 10 01:31:19.934914 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2de2345ba8612ade61882513e7d9ebf4aad52996b6d7f4c567d9970e886b17cc
Mar 10 01:31:19.934926 kernel: random: crng init done
Mar 10 01:31:19.934939 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 10 01:31:19.934950 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 10 01:31:19.934963 kernel: Fallback order for Node 0: 0
Mar 10 01:31:19.934975 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Mar 10 01:31:19.934987 kernel: Policy zone: DMA32
Mar 10 01:31:19.934999 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 10 01:31:19.935018 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 136884K reserved, 0K cma-reserved)
Mar 10 01:31:19.935030 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 10 01:31:19.935042 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 10 01:31:19.935054 kernel: ftrace: allocated 149 pages with 4 groups
Mar 10 01:31:19.935066 kernel: Dynamic Preempt: voluntary
Mar 10 01:31:19.935078 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 10 01:31:19.935092 kernel: rcu: RCU event tracing is enabled.
Mar 10 01:31:19.935105 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 10 01:31:19.935117 kernel: Trampoline variant of Tasks RCU enabled.
Mar 10 01:31:19.935135 kernel: Rude variant of Tasks RCU enabled.
Mar 10 01:31:19.935148 kernel: Tracing variant of Tasks RCU enabled.
Mar 10 01:31:19.935159 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 10 01:31:19.935172 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 10 01:31:19.935208 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 10 01:31:19.935221 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 10 01:31:19.935233 kernel: Console: colour VGA+ 80x25
Mar 10 01:31:19.935246 kernel: printk: console [ttyS0] enabled
Mar 10 01:31:19.935312 kernel: ACPI: Core revision 20230628
Mar 10 01:31:19.935333 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 10 01:31:19.935345 kernel: APIC: Switch to symmetric I/O mode setup
Mar 10 01:31:19.935356 kernel: x2apic enabled
Mar 10 01:31:19.935368 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 10 01:31:19.935380 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 10 01:31:19.935393 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 10 01:31:19.935405 kernel: kvm-guest: setup PV IPIs
Mar 10 01:31:19.935417 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 10 01:31:19.935483 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 10 01:31:19.935497 kernel: Calibrating delay loop (skipped) preset value.. 4890.84 BogoMIPS (lpj=2445424)
Mar 10 01:31:19.935511 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 10 01:31:19.935523 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 10 01:31:19.935541 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 10 01:31:19.935554 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 10 01:31:19.935568 kernel: Spectre V2 : Mitigation: Retpolines
Mar 10 01:31:19.935581 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 10 01:31:19.935598 kernel: Speculative Store Bypass: Vulnerable
Mar 10 01:31:19.935610 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 10 01:31:19.935650 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 10 01:31:19.935664 kernel: active return thunk: srso_alias_return_thunk
Mar 10 01:31:19.935676 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 10 01:31:19.935690 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 10 01:31:19.935702 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 10 01:31:19.935714 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 10 01:31:19.935727 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 10 01:31:19.935747 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 10 01:31:19.935759 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 10 01:31:19.935771 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 10 01:31:19.935784 kernel: Freeing SMP alternatives memory: 32K
Mar 10 01:31:19.935796 kernel: pid_max: default: 32768 minimum: 301
Mar 10 01:31:19.935809 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 10 01:31:19.935821 kernel: landlock: Up and running.
Mar 10 01:31:19.935833 kernel: SELinux: Initializing.
Mar 10 01:31:19.935846 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 10 01:31:19.935866 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 10 01:31:19.935879 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 10 01:31:19.935892 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 10 01:31:19.935904 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 10 01:31:19.935918 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 10 01:31:19.935931 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 10 01:31:19.935942 kernel: signal: max sigframe size: 1776
Mar 10 01:31:19.935983 kernel: rcu: Hierarchical SRCU implementation.
Mar 10 01:31:19.936003 kernel: rcu: Max phase no-delay instances is 400.
Mar 10 01:31:19.936017 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 10 01:31:19.936029 kernel: smp: Bringing up secondary CPUs ...
Mar 10 01:31:19.936041 kernel: smpboot: x86: Booting SMP configuration:
Mar 10 01:31:19.936055 kernel: .... node #0, CPUs: #1 #2 #3
Mar 10 01:31:19.936066 kernel: smp: Brought up 1 node, 4 CPUs
Mar 10 01:31:19.936079 kernel: smpboot: Max logical packages: 1
Mar 10 01:31:19.936093 kernel: smpboot: Total of 4 processors activated (19563.39 BogoMIPS)
Mar 10 01:31:19.936106 kernel: devtmpfs: initialized
Mar 10 01:31:19.936118 kernel: x86/mm: Memory block size: 128MB
Mar 10 01:31:19.936137 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 10 01:31:19.936150 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 10 01:31:19.936163 kernel: pinctrl core: initialized pinctrl subsystem
Mar 10 01:31:19.936175 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 10 01:31:19.936188 kernel: audit: initializing netlink subsys (disabled)
Mar 10 01:31:19.936202 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 10 01:31:19.936215 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 10 01:31:19.936227 kernel: audit: type=2000 audit(1773106273.420:1): state=initialized audit_enabled=0 res=1
Mar 10 01:31:19.936239 kernel: cpuidle: using governor menu
Mar 10 01:31:19.936310 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 10 01:31:19.936325 kernel: dca service started, version 1.12.1
Mar 10 01:31:19.936338 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 10 01:31:19.936350 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 10 01:31:19.936363 kernel: PCI: Using configuration type 1 for base access
Mar 10 01:31:19.936377 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 10 01:31:19.936390 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 10 01:31:19.936403 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 10 01:31:19.936423 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 10 01:31:19.936437 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 10 01:31:19.936484 kernel: ACPI: Added _OSI(Module Device)
Mar 10 01:31:19.936497 kernel: ACPI: Added _OSI(Processor Device)
Mar 10 01:31:19.936509 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 10 01:31:19.936523 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 10 01:31:19.936536 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 10 01:31:19.936548 kernel: ACPI: Interpreter enabled
Mar 10 01:31:19.936560 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 10 01:31:19.936573 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 10 01:31:19.936593 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 10 01:31:19.936605 kernel: PCI: Using E820 reservations for host bridge windows
Mar 10 01:31:19.936618 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 10 01:31:19.936630 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 10 01:31:19.937176 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 10 01:31:19.937629 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 10 01:31:19.937872 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 10 01:31:19.937900 kernel: PCI host bridge to bus 0000:00
Mar 10 01:31:19.938350 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 10 01:31:19.938622 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 10 01:31:19.938836 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 10 01:31:19.939121 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 10 01:31:19.939385 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 10 01:31:19.939642 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 10 01:31:19.939874 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 10 01:31:19.940365 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 10 01:31:19.940764 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 10 01:31:19.940981 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 10 01:31:19.941410 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 10 01:31:19.941680 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 10 01:31:19.941926 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 10 01:31:19.942318 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 10 01:31:19.942596 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 10 01:31:19.942832 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 10 01:31:19.943166 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 10 01:31:19.943569 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 10 01:31:19.943840 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Mar 10 01:31:19.944096 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 10 01:31:19.944390 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 10 01:31:19.944806 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 10 01:31:19.944966 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Mar 10 01:31:19.945114 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Mar 10 01:31:19.945316 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 10 01:31:19.945543 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 10 01:31:19.945794 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 10 01:31:19.946088 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 10 01:31:19.946357 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 10 01:31:19.946552 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Mar 10 01:31:19.946777 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Mar 10 01:31:19.947132 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 10 01:31:19.947408 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 10 01:31:19.947436 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 10 01:31:19.947491 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 10 01:31:19.947501 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 10 01:31:19.947512 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 10 01:31:19.947524 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 10 01:31:19.947536 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 10 01:31:19.947550 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 10 01:31:19.947562 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 10 01:31:19.947580 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 10 01:31:19.947591 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 10 01:31:19.947604 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 10 01:31:19.947616 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 10 01:31:19.947629 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 10 01:31:19.947640 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 10 01:31:19.947651 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 10 01:31:19.947663 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 10 01:31:19.947673 kernel: iommu: Default domain type: Translated
Mar 10 01:31:19.947694 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 10 01:31:19.947705 kernel: PCI: Using ACPI for IRQ routing
Mar 10 01:31:19.947718 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 10 01:31:19.947730 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 10 01:31:19.947740 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 10 01:31:19.947991 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 10 01:31:19.948241 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 10 01:31:19.948575 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 10 01:31:19.948595 kernel: vgaarb: loaded
Mar 10 01:31:19.948612 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 10 01:31:19.948623 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 10 01:31:19.948636 kernel: clocksource: Switched to clocksource kvm-clock
Mar 10 01:31:19.948649 kernel: VFS: Disk quotas dquot_6.6.0
Mar 10 01:31:19.948659 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 10 01:31:19.948669 kernel: pnp: PnP ACPI init
Mar 10 01:31:19.949110 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 10 01:31:19.949133 kernel: pnp: PnP ACPI: found 6 devices
Mar 10 01:31:19.949154 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 10 01:31:19.949171 kernel: NET: Registered PF_INET protocol family
Mar 10 01:31:19.949184 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 10 01:31:19.949195 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 10 01:31:19.949207 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 10 01:31:19.949219 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 10 01:31:19.949233 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 10 01:31:19.949243 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 10 01:31:19.949311 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 10 01:31:19.949331 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 10 01:31:19.949342 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 10 01:31:19.949354 kernel: NET: Registered PF_XDP protocol family
Mar 10 01:31:19.949611 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 10 01:31:19.949818 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 10 01:31:19.950019 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 10 01:31:19.950213 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 10 01:31:19.950518 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 10 01:31:19.950738 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 10 01:31:19.950757 kernel: PCI: CLS 0 bytes, default 64
Mar 10 01:31:19.950768 kernel: Initialise system trusted keyrings
Mar 10 01:31:19.950778 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 10 01:31:19.950790 kernel: Key type asymmetric registered
Mar 10 01:31:19.950802 kernel: Asymmetric key parser 'x509' registered
Mar 10 01:31:19.950815 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 10 01:31:19.950828 kernel: io scheduler mq-deadline registered
Mar 10 01:31:19.950840 kernel: io scheduler kyber registered
Mar 10 01:31:19.950856 kernel: io scheduler bfq registered
Mar 10 01:31:19.950866 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 10 01:31:19.950879 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 10 01:31:19.950891 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 10 01:31:19.950903 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 10 01:31:19.950916 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 10 01:31:19.950928 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 10 01:31:19.950942 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 10 01:31:19.950955 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 10 01:31:19.950972 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 10 01:31:19.951360 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 10 01:31:19.951380 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 10 01:31:19.951609 kernel: rtc_cmos 00:04: registered as rtc0
Mar 10 01:31:19.951810 kernel: rtc_cmos 00:04: setting system clock to 2026-03-10T01:31:18 UTC (1773106278)
Mar 10 01:31:19.952014 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 10 01:31:19.952035 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 10 01:31:19.952046 kernel: NET: Registered PF_INET6 protocol family
Mar 10 01:31:19.952063 kernel: Segment Routing with IPv6
Mar 10 01:31:19.952076 kernel: In-situ OAM (IOAM) with IPv6
Mar 10 01:31:19.952086 kernel: NET: Registered PF_PACKET protocol family
Mar 10 01:31:19.952096 kernel: Key type dns_resolver registered
Mar 10 01:31:19.952106 kernel: IPI shorthand broadcast: enabled
Mar 10 01:31:19.952118 kernel: sched_clock: Marking stable (5301032430, 1851115845)->(8436690978, -1284542703)
Mar 10 01:31:19.952130 kernel: registered taskstats version 1
Mar 10 01:31:19.952143 kernel: Loading compiled-in X.509 certificates
Mar 10 01:31:19.952155 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 611e035accba842cc9fafb5ced2ca41a603067aa'
Mar 10 01:31:19.952174 kernel: Key type .fscrypt registered
Mar 10 01:31:19.952186 kernel: Key type fscrypt-provisioning registered
Mar 10 01:31:19.952199 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 10 01:31:19.952211 kernel: ima: Allocated hash algorithm: sha1
Mar 10 01:31:19.952223 kernel: ima: No architecture policies found
Mar 10 01:31:19.952236 kernel: clk: Disabling unused clocks
Mar 10 01:31:19.952247 kernel: Freeing unused kernel image (initmem) memory: 42896K
Mar 10 01:31:19.952338 kernel: Write protecting the kernel read-only data: 36864k
Mar 10 01:31:19.952349 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 10 01:31:19.952366 kernel: Run /init as init process
Mar 10 01:31:19.952378 kernel: with arguments:
Mar 10 01:31:19.952392 kernel: /init
Mar 10 01:31:19.952404 kernel: with environment:
Mar 10 01:31:19.952415 kernel: HOME=/
Mar 10 01:31:19.952427 kernel: TERM=linux
Mar 10 01:31:19.952474 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 10 01:31:19.952490 systemd[1]: Detected virtualization kvm.
Mar 10 01:31:19.952509 systemd[1]: Detected architecture x86-64.
Mar 10 01:31:19.952521 systemd[1]: Running in initrd.
Mar 10 01:31:19.952532 systemd[1]: No hostname configured, using default hostname.
Mar 10 01:31:19.952544 systemd[1]: Hostname set to .
Mar 10 01:31:19.952556 systemd[1]: Initializing machine ID from VM UUID.
Mar 10 01:31:19.952568 systemd[1]: Queued start job for default target initrd.target.
Mar 10 01:31:19.952580 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 10 01:31:19.952592 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 10 01:31:19.952609 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 10 01:31:19.952621 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 10 01:31:19.952633 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 10 01:31:19.952645 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 10 01:31:19.952660 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 10 01:31:19.952673 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 10 01:31:19.952688 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 10 01:31:19.952701 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 10 01:31:19.952713 systemd[1]: Reached target paths.target - Path Units.
Mar 10 01:31:19.952725 systemd[1]: Reached target slices.target - Slice Units.
Mar 10 01:31:19.952738 systemd[1]: Reached target swap.target - Swaps.
Mar 10 01:31:19.952769 systemd[1]: Reached target timers.target - Timer Units.
Mar 10 01:31:19.952786 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 10 01:31:19.952801 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 10 01:31:19.952814 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 10 01:31:19.952827 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 10 01:31:19.952840 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 10 01:31:19.952853 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 10 01:31:19.952865 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 10 01:31:19.952878 systemd[1]: Reached target sockets.target - Socket Units.
Mar 10 01:31:19.952891 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 10 01:31:19.952908 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 10 01:31:19.952924 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 10 01:31:19.952938 systemd[1]: Starting systemd-fsck-usr.service...
Mar 10 01:31:19.952950 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 10 01:31:19.952963 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 10 01:31:19.952976 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 10 01:31:19.952988 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 10 01:31:19.953037 systemd-journald[194]: Collecting audit messages is disabled.
Mar 10 01:31:19.953072 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 10 01:31:19.953085 systemd[1]: Finished systemd-fsck-usr.service.
Mar 10 01:31:19.953102 systemd-journald[194]: Journal started
Mar 10 01:31:19.953126 systemd-journald[194]: Runtime Journal (/run/log/journal/ab1db9eadda54148895bb12eb4fe08c9) is 6.0M, max 48.4M, 42.3M free.
Mar 10 01:31:19.959328 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 10 01:31:19.928907 systemd-modules-load[195]: Inserted module 'overlay'
Mar 10 01:31:19.979302 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 10 01:31:19.980828 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 10 01:31:19.991349 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 10 01:31:19.998518 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 10 01:31:20.042202 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 10 01:31:20.042847 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 10 01:31:20.232888 kernel: Bridge firewalling registered
Mar 10 01:31:20.047351 systemd-modules-load[195]: Inserted module 'br_netfilter'
Mar 10 01:31:20.236570 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 10 01:31:20.237438 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 10 01:31:20.260337 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 10 01:31:20.304691 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 10 01:31:20.366166 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 10 01:31:20.479350 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 10 01:31:20.503158 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 10 01:31:20.553082 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 10 01:31:20.558642 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 10 01:31:20.638559 dracut-cmdline[233]: dracut-dracut-053
Mar 10 01:31:20.650027 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2de2345ba8612ade61882513e7d9ebf4aad52996b6d7f4c567d9970e886b17cc
Mar 10 01:31:20.665861 systemd-resolved[231]: Positive Trust Anchors:
Mar 10 01:31:20.665874 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 10 01:31:20.665919 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 10 01:31:20.670202 systemd-resolved[231]: Defaulting to hostname 'linux'.
Mar 10 01:31:20.675018 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 10 01:31:20.688685 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 10 01:31:20.945530 kernel: SCSI subsystem initialized
Mar 10 01:31:20.960152 kernel: Loading iSCSI transport class v2.0-870.
Mar 10 01:31:20.983484 kernel: iscsi: registered transport (tcp)
Mar 10 01:31:21.030215 kernel: iscsi: registered transport (qla4xxx)
Mar 10 01:31:21.030352 kernel: QLogic iSCSI HBA Driver
Mar 10 01:31:21.132515 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 10 01:31:21.154626 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 10 01:31:21.196712 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
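[Annotation] The bridge message above is one of the few actionable lines in this log: on modern kernels, bridged traffic only traverses iptables/nftables once br_netfilter is loaded (here systemd-modules-load inserts it moments later). A minimal sketch of doing the same persistently on a general systemd distro; the file paths follow the standard modules-load.d(5)/sysctl.d(5) conventions, and whether you want these sysctls depends on your container networking setup:

```shell
# Load the module immediately (requires root).
modprobe br_netfilter

# Persist the module load across reboots (systemd-modules-load reads this dir).
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf

# Common companion sysctls so bridged pod traffic traverses iptables.
cat > /etc/sysctl.d/99-bridge-nf.conf <<'EOF'
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system
```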
Mar 10 01:31:21.196803 kernel: device-mapper: uevent: version 1.0.3
Mar 10 01:31:21.196824 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 10 01:31:21.283481 kernel: raid6: avx2x4 gen() 24250 MB/s
Mar 10 01:31:21.302537 kernel: raid6: avx2x2 gen() 22945 MB/s
Mar 10 01:31:21.322303 kernel: raid6: avx2x1 gen() 19911 MB/s
Mar 10 01:31:21.322387 kernel: raid6: using algorithm avx2x4 gen() 24250 MB/s
Mar 10 01:31:21.342903 kernel: raid6: .... xor() 4916 MB/s, rmw enabled
Mar 10 01:31:21.343005 kernel: raid6: using avx2x2 recovery algorithm
Mar 10 01:31:21.369382 kernel: xor: automatically using best checksumming function avx
Mar 10 01:31:21.698563 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 10 01:31:21.723680 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 10 01:31:21.753643 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 10 01:31:21.779752 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Mar 10 01:31:21.790333 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 10 01:31:21.809148 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 10 01:31:21.846598 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation
Mar 10 01:31:21.918994 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 10 01:31:21.945834 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 10 01:31:22.121223 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 10 01:31:22.147782 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 10 01:31:22.173678 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 10 01:31:22.198568 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 10 01:31:22.209178 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 10 01:31:22.225926 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 10 01:31:22.253953 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 10 01:31:22.313924 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 10 01:31:22.356381 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 10 01:31:22.362522 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 10 01:31:22.372823 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 10 01:31:22.383633 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 10 01:31:22.383911 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 10 01:31:22.388133 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 10 01:31:22.427923 kernel: libata version 3.00 loaded.
Mar 10 01:31:22.436474 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 10 01:31:22.439980 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 10 01:31:22.465715 kernel: cryptd: max_cpu_qlen set to 1000
Mar 10 01:31:22.465883 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 10 01:31:22.466820 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 10 01:31:22.466851 kernel: GPT:9289727 != 19775487
Mar 10 01:31:22.466866 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 10 01:31:22.480492 kernel: GPT:9289727 != 19775487
Mar 10 01:31:22.480627 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 10 01:31:22.480644 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 10 01:31:22.528229 kernel: ahci 0000:00:1f.2: version 3.0
Mar 10 01:31:22.528791 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 10 01:31:22.559186 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 10 01:31:22.559805 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 10 01:31:22.560776 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 10 01:31:22.576324 kernel: AES CTR mode by8 optimization enabled
Mar 10 01:31:22.579511 kernel: scsi host0: ahci
Mar 10 01:31:22.587068 kernel: scsi host1: ahci
Mar 10 01:31:22.590322 kernel: scsi host2: ahci
Mar 10 01:31:22.590671 kernel: scsi host3: ahci
Mar 10 01:31:22.592314 kernel: scsi host4: ahci
Mar 10 01:31:22.593311 kernel: scsi host5: ahci
Mar 10 01:31:22.593626 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Mar 10 01:31:22.593646 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Mar 10 01:31:22.593663 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Mar 10 01:31:22.593678 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Mar 10 01:31:22.593693 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Mar 10 01:31:22.593709 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Mar 10 01:31:22.649419 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (484)
Mar 10 01:31:22.654358 kernel: BTRFS: device fsid a7ce059b-f34b-4785-93b9-44632d452486 devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (463)
Mar 10 01:31:22.672936 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 10 01:31:22.898884 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
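[Annotation] The GPT complaints above are typical of a disk image that was grown after creation: the backup GPT header still sits at the old end of the disk (sector 9289727 instead of 19775487). Flatcar's disk-uuid.service repairs this automatically later in this boot (the "Secondary Header is updated" lines), but a hedged manual equivalent with sgdisk, shown against the hypothetical /dev/vda from this log, would be:

```shell
# Relocate the backup GPT header and entries to the true end of the disk.
# sgdisk -e (--move-second-header) does not touch partition contents, but
# back the disk up before editing any partition table.
sgdisk --move-second-header /dev/vda

# Or interactively, as the kernel message suggests: GNU Parted offers to
# "Fix" the backup GPT when it notices the mismatch.
# parted /dev/vda print
```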
Mar 10 01:31:22.960344 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 10 01:31:22.960381 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 10 01:31:22.960392 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 10 01:31:22.960402 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 10 01:31:22.960412 kernel: ata3.00: applying bridge limits
Mar 10 01:31:22.960422 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 10 01:31:22.960432 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 10 01:31:22.960473 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 10 01:31:22.960494 kernel: ata3.00: configured for UDMA/100
Mar 10 01:31:22.960515 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 10 01:31:22.915431 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 10 01:31:22.987905 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 10 01:31:23.005328 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 10 01:31:23.032111 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 10 01:31:23.064320 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 10 01:31:23.088576 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 10 01:31:23.138824 disk-uuid[569]: Primary Header is updated.
Mar 10 01:31:23.138824 disk-uuid[569]: Secondary Entries is updated.
Mar 10 01:31:23.138824 disk-uuid[569]: Secondary Header is updated.
Mar 10 01:31:23.162484 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 10 01:31:23.163064 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 10 01:31:23.163085 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 10 01:31:23.181400 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 10 01:31:23.215419 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 10 01:31:23.216032 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 10 01:31:24.259413 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 10 01:31:24.264068 disk-uuid[570]: The operation has completed successfully.
Mar 10 01:31:24.392029 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 10 01:31:24.392388 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 10 01:31:24.431979 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 10 01:31:24.447237 sh[596]: Success
Mar 10 01:31:24.508771 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 10 01:31:24.666125 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 10 01:31:24.691510 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 10 01:31:24.745870 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 10 01:31:24.763941 kernel: BTRFS info (device dm-0): first mount of filesystem a7ce059b-f34b-4785-93b9-44632d452486
Mar 10 01:31:24.763977 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 10 01:31:24.763995 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 10 01:31:24.764012 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 10 01:31:24.764044 kernel: BTRFS info (device dm-0): using free space tree
Mar 10 01:31:24.815563 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 10 01:31:24.841777 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 10 01:31:24.867228 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 10 01:31:24.901963 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 10 01:31:24.965116 kernel: BTRFS info (device vda6): first mount of filesystem 3e73d814-00c9-411d-8220-21b9b3666124
Mar 10 01:31:24.965491 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 10 01:31:24.965570 kernel: BTRFS info (device vda6): using free space tree
Mar 10 01:31:25.007620 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 10 01:31:25.159022 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 10 01:31:25.171358 kernel: BTRFS info (device vda6): last unmount of filesystem 3e73d814-00c9-411d-8220-21b9b3666124
Mar 10 01:31:25.231109 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 10 01:31:25.283597 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 10 01:31:26.009027 ignition[683]: Ignition 2.19.0
Mar 10 01:31:26.009439 ignition[683]: Stage: fetch-offline
Mar 10 01:31:26.011055 ignition[683]: no configs at "/usr/lib/ignition/base.d"
Mar 10 01:31:26.011075 ignition[683]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 10 01:31:26.011213 ignition[683]: parsed url from cmdline: ""
Mar 10 01:31:26.011220 ignition[683]: no config URL provided
Mar 10 01:31:26.011228 ignition[683]: reading system config file "/usr/lib/ignition/user.ign"
Mar 10 01:31:26.011244 ignition[683]: no config at "/usr/lib/ignition/user.ign"
Mar 10 01:31:26.011518 ignition[683]: op(1): [started] loading QEMU firmware config module
Mar 10 01:31:26.011528 ignition[683]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 10 01:31:26.086161 ignition[683]: op(1): [finished] loading QEMU firmware config module
Mar 10 01:31:26.613819 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 10 01:31:26.618852 ignition[683]: parsing config with SHA512: 6d150d8c172deb0874d8f64be77ba5eda77a99f345e6569dcef581a4ed5c83df78b3499624edb21996bd0473ed7b5e34d15d2d2b51744330309bb834e54027bb
Mar 10 01:31:26.641839 unknown[683]: fetched base config from "system"
Mar 10 01:31:26.641888 unknown[683]: fetched user config from "qemu"
Mar 10 01:31:26.642736 ignition[683]: fetch-offline: fetch-offline passed
Mar 10 01:31:26.642897 ignition[683]: Ignition finished successfully
Mar 10 01:31:26.677636 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 10 01:31:26.682526 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 10 01:31:26.734035 systemd-networkd[784]: lo: Link UP
Mar 10 01:31:26.734745 systemd-networkd[784]: lo: Gained carrier
Mar 10 01:31:26.738933 systemd-networkd[784]: Enumeration completed
Mar 10 01:31:26.739133 systemd[1]: Started systemd-networkd.service - Network Configuration.
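[Annotation] The fetch-offline lines show where Ignition looks for a config on this platform: the kernel cmdline, then /usr/lib/ignition/user.ign, and finally the QEMU firmware config device (hence the qemu_fw_cfg modprobe), where it found the user config whose SHA512 it logs. A sketch of how such a config is handed to the guest at launch; the image and config file names are illustrative:

```shell
# Pass an Ignition config to a Flatcar guest over fw_cfg; the key
# opt/org.flatcar-linux/config is the name Ignition reads on QEMU.
qemu-system-x86_64 \
  -machine q35 -enable-kvm -m 2048 \
  -fw_cfg name=opt/org.flatcar-linux/config,file=./config.ign \
  -drive if=virtio,file=./flatcar_production_qemu_image.img \
  -nographic
```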
Mar 10 01:31:26.740414 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 10 01:31:26.740420 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 10 01:31:26.766559 systemd-networkd[784]: eth0: Link UP
Mar 10 01:31:26.766566 systemd-networkd[784]: eth0: Gained carrier
Mar 10 01:31:26.766581 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 10 01:31:26.768687 systemd[1]: Reached target network.target - Network.
Mar 10 01:31:26.771073 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 10 01:31:26.864722 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 10 01:31:26.899960 systemd-networkd[784]: eth0: DHCPv4 address 10.0.0.144/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 10 01:31:27.041009 ignition[787]: Ignition 2.19.0
Mar 10 01:31:27.043950 ignition[787]: Stage: kargs
Mar 10 01:31:27.044569 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Mar 10 01:31:27.044592 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 10 01:31:27.071387 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 10 01:31:27.046044 ignition[787]: kargs: kargs passed
Mar 10 01:31:27.046136 ignition[787]: Ignition finished successfully
Mar 10 01:31:27.113880 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 10 01:31:27.275605 ignition[796]: Ignition 2.19.0
Mar 10 01:31:27.275620 ignition[796]: Stage: disks
Mar 10 01:31:27.283835 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 10 01:31:27.275884 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Mar 10 01:31:27.284810 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
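[Annotation] The repeated "potentially unpredictable interface name" warning means eth0 was matched by the catch-all zz-default.network shipped in the image. If that matters for your deployment, a unit that matches on something stable (and sorts before zz-default, so it takes precedence) silences it; a sketch, with a placeholder MAC address:

```ini
# /etc/systemd/network/10-stable-dhcp.network (hypothetical)
[Match]
MACAddress=52:54:00:12:34:56

[Network]
DHCP=yes
```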
Mar 10 01:31:27.275904 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 10 01:31:27.298545 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 10 01:31:27.277083 ignition[796]: disks: disks passed
Mar 10 01:31:27.306957 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 10 01:31:27.277157 ignition[796]: Ignition finished successfully
Mar 10 01:31:27.310959 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 10 01:31:27.311077 systemd[1]: Reached target basic.target - Basic System.
Mar 10 01:31:27.348789 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 10 01:31:27.457840 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 10 01:31:27.475730 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 10 01:31:27.504593 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 10 01:31:28.091846 kernel: EXT4-fs (vda9): mounted filesystem 8ab7565f-94b4-4514-a19e-abd5bcc78da1 r/w with ordered data mode. Quota mode: none.
Mar 10 01:31:28.105690 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 10 01:31:28.133787 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 10 01:31:28.169648 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 10 01:31:28.174587 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 10 01:31:28.190429 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 10 01:31:28.190730 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 10 01:31:28.190855 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 10 01:31:28.205596 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 10 01:31:28.271839 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 10 01:31:28.304650 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (815)
Mar 10 01:31:28.314482 kernel: BTRFS info (device vda6): first mount of filesystem 3e73d814-00c9-411d-8220-21b9b3666124
Mar 10 01:31:28.314616 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 10 01:31:28.314641 kernel: BTRFS info (device vda6): using free space tree
Mar 10 01:31:28.357202 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 10 01:31:28.360860 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 10 01:31:28.472178 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory
Mar 10 01:31:28.496864 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory
Mar 10 01:31:28.528708 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory
Mar 10 01:31:28.547153 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 10 01:31:28.726434 systemd-networkd[784]: eth0: Gained IPv6LL
Mar 10 01:31:29.361401 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 10 01:31:29.394706 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 10 01:31:29.433869 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 10 01:31:29.477435 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 10 01:31:29.489490 kernel: BTRFS info (device vda6): last unmount of filesystem 3e73d814-00c9-411d-8220-21b9b3666124
Mar 10 01:31:29.542136 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 10 01:31:29.653200 ignition[928]: INFO : Ignition 2.19.0
Mar 10 01:31:29.653200 ignition[928]: INFO : Stage: mount
Mar 10 01:31:29.653200 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 10 01:31:29.653200 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 10 01:31:29.676129 ignition[928]: INFO : mount: mount passed
Mar 10 01:31:29.676129 ignition[928]: INFO : Ignition finished successfully
Mar 10 01:31:29.681752 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 10 01:31:29.706405 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 10 01:31:29.743620 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 10 01:31:29.781933 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (942)
Mar 10 01:31:29.782021 kernel: BTRFS info (device vda6): first mount of filesystem 3e73d814-00c9-411d-8220-21b9b3666124
Mar 10 01:31:29.789338 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 10 01:31:29.791918 kernel: BTRFS info (device vda6): using free space tree
Mar 10 01:31:29.809510 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 10 01:31:29.816184 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 10 01:31:29.933150 ignition[959]: INFO : Ignition 2.19.0
Mar 10 01:31:29.933150 ignition[959]: INFO : Stage: files
Mar 10 01:31:29.933150 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 10 01:31:29.933150 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 10 01:31:29.953541 ignition[959]: DEBUG : files: compiled without relabeling support, skipping
Mar 10 01:31:29.953541 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 10 01:31:29.953541 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 10 01:31:29.953541 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 10 01:31:29.953541 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 10 01:31:29.982492 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 10 01:31:29.982492 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 10 01:31:29.982492 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 10 01:31:29.955477 unknown[959]: wrote ssh authorized keys file for user: core
Mar 10 01:31:30.078071 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 10 01:31:30.491815 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 10 01:31:30.491815 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 10 01:31:30.509147 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 10 01:31:30.854744 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 10 01:31:31.269020 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 10 01:31:31.269020 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 10 01:31:31.285219 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 10 01:31:31.285219 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 10 01:31:31.285219 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 10 01:31:31.285219 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 10 01:31:31.285219 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 10 01:31:31.285219 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 10 01:31:31.285219 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 10 01:31:31.285219 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 10 01:31:31.285219 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 10 01:31:31.285219 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 10 01:31:31.285219 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 10 01:31:31.285219 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 10 01:31:31.285219 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Mar 10 01:31:31.625497 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 10 01:31:35.445920 kernel: hrtimer: interrupt took 12013834 ns
Mar 10 01:31:35.863322 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 10 01:31:35.863322 ignition[959]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 10 01:31:35.910529 ignition[959]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 10 01:31:35.910529 ignition[959]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 10 01:31:35.910529 ignition[959]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 10 01:31:35.910529 ignition[959]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Mar 10 01:31:35.910529 ignition[959]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 10 01:31:35.910529 ignition[959]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 10 01:31:35.910529 ignition[959]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Mar 10 01:31:35.910529 ignition[959]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Mar 10 01:31:36.143797 ignition[959]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 10 01:31:36.166722 ignition[959]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 10 01:31:36.173697 ignition[959]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 10 01:31:36.173697 ignition[959]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Mar 10 01:31:36.184838 ignition[959]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Mar 10 01:31:36.192928 ignition[959]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 10 01:31:36.199071 ignition[959]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 10 01:31:36.204620 ignition[959]: INFO : files: files passed
Mar 10 01:31:36.207207 ignition[959]: INFO : Ignition finished successfully
Mar 10 01:31:36.213038 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 10 01:31:36.244736 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 10 01:31:36.253564 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 10 01:31:36.262741 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 10 01:31:36.262898 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 10 01:31:36.279733 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 10 01:31:36.285568 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 10 01:31:36.285568 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 10 01:31:36.296470 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 10 01:31:36.293115 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 10 01:31:36.306652 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 10 01:31:36.326601 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 10 01:31:36.395910 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 10 01:31:36.400226 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 10 01:31:36.411003 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 10 01:31:36.430868 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 10 01:31:36.438665 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 10 01:31:36.455687 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 10 01:31:36.484365 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 10 01:31:36.505401 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 10 01:31:36.534327 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 10 01:31:36.539875 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 10 01:31:36.543801 systemd[1]: Stopped target timers.target - Timer Units.
Mar 10 01:31:36.547544 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 10 01:31:36.547832 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 10 01:31:36.558783 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 10 01:31:36.575480 systemd[1]: Stopped target basic.target - Basic System.
Mar 10 01:31:36.579697 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 10 01:31:36.588059 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 10 01:31:36.601740 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 10 01:31:36.615727 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 10 01:31:36.625733 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 10 01:31:36.625966 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 10 01:31:36.626168 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 10 01:31:36.626381 systemd[1]: Stopped target swap.target - Swaps.
Mar 10 01:31:36.626536 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 10 01:31:36.626798 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 10 01:31:36.628039 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 10 01:31:36.633932 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 10 01:31:36.636852 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 10 01:31:36.641147 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 10 01:31:36.644403 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 10 01:31:36.647679 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 10 01:31:36.649102 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 10 01:31:36.649344 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 10 01:31:36.649896 systemd[1]: Stopped target paths.target - Path Units.
Mar 10 01:31:36.650428 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 10 01:31:36.662070 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 10 01:31:36.670729 systemd[1]: Stopped target slices.target - Slice Units.
Mar 10 01:31:36.672797 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 10 01:31:36.674366 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 10 01:31:36.674577 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 10 01:31:36.683920 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 10 01:31:36.692544 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 10 01:31:36.700526 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 10 01:31:36.700947 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 10 01:31:36.718306 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 10 01:31:37.067035 ignition[1015]: INFO : Ignition 2.19.0
Mar 10 01:31:37.067035 ignition[1015]: INFO : Stage: umount
Mar 10 01:31:37.067035 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 10 01:31:37.067035 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 10 01:31:37.067035 ignition[1015]: INFO : umount: umount passed
Mar 10 01:31:37.067035 ignition[1015]: INFO : Ignition finished successfully
Mar 10 01:31:36.719046 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 10 01:31:36.841822 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 10 01:31:36.848249 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 10 01:31:36.854983 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 10 01:31:36.900389 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 10 01:31:36.907800 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 10 01:31:36.908202 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 10 01:31:36.925100 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 10 01:31:36.925561 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 10 01:31:37.008349 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 10 01:31:37.009364 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 10 01:31:37.009587 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 10 01:31:37.044235 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 10 01:31:37.044582 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 10 01:31:37.067211 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 10 01:31:37.067882 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 10 01:31:37.076019 systemd[1]: Stopped target network.target - Network.
Mar 10 01:31:37.076121 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 10 01:31:37.076232 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 10 01:31:37.076418 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 10 01:31:37.076531 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 10 01:31:37.076670 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 10 01:31:37.076769 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 10 01:31:37.076867 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 10 01:31:37.076938 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 10 01:31:37.077034 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 10 01:31:37.077101 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 10 01:31:37.079985 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 10 01:31:37.081676 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 10 01:31:37.114860 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 10 01:31:37.115158 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 10 01:31:37.154576 systemd-networkd[784]: eth0: DHCPv6 lease lost
Mar 10 01:31:37.199469 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 10 01:31:37.249966 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 10 01:31:37.290125 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 10 01:31:37.310376 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 10 01:31:37.395009 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 10 01:31:37.406201 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 10 01:31:37.406742 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 10 01:31:37.435793 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 10 01:31:37.442213 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 10 01:31:37.457125 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 10 01:31:37.457611 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 10 01:31:37.466892 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 10 01:31:37.467101 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 10 01:31:37.484624 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 10 01:31:37.511110 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 10 01:31:37.511764 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 10 01:31:37.536767 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 10 01:31:37.536984 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 10 01:31:37.544913 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 10 01:31:37.545013 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 10 01:31:37.550979 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 10 01:31:37.551082 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 10 01:31:37.559384 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 10 01:31:37.559552 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 10 01:31:37.568482 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 10 01:31:37.568640 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 10 01:31:37.575640 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 10 01:31:37.575736 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 10 01:31:37.614485 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 10 01:31:37.642638 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 10 01:31:37.642754 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 10 01:31:37.654320 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 10 01:31:37.654413 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 10 01:31:37.662324 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 10 01:31:37.662540 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 10 01:31:37.669743 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 10 01:31:37.715711 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 10 01:31:37.753810 systemd[1]: Switching root.
Mar 10 01:31:37.793188 systemd-journald[194]: Journal stopped
Mar 10 01:31:41.515559 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Mar 10 01:31:41.515646 kernel: SELinux: policy capability network_peer_controls=1
Mar 10 01:31:41.515672 kernel: SELinux: policy capability open_perms=1
Mar 10 01:31:41.515689 kernel: SELinux: policy capability extended_socket_class=1
Mar 10 01:31:41.515704 kernel: SELinux: policy capability always_check_network=0
Mar 10 01:31:41.515719 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 10 01:31:41.515741 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 10 01:31:41.515912 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 10 01:31:41.515929 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 10 01:31:41.515944 kernel: audit: type=1403 audit(1773106298.127:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 10 01:31:41.515969 systemd[1]: Successfully loaded SELinux policy in 120.073ms.
Mar 10 01:31:41.516001 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 31.696ms.
Mar 10 01:31:41.516019 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 10 01:31:41.516036 systemd[1]: Detected virtualization kvm.
Mar 10 01:31:41.516053 systemd[1]: Detected architecture x86-64.
Mar 10 01:31:41.516201 systemd[1]: Detected first boot.
Mar 10 01:31:41.516224 systemd[1]: Initializing machine ID from VM UUID.
Mar 10 01:31:41.516250 zram_generator::config[1058]: No configuration found.
Mar 10 01:31:41.516561 systemd[1]: Populated /etc with preset unit settings.
Mar 10 01:31:41.516582 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 10 01:31:41.516600 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 10 01:31:41.516617 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 10 01:31:41.516635 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 10 01:31:41.516659 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 10 01:31:41.516676 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 10 01:31:41.516692 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 10 01:31:41.516709 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 10 01:31:41.516727 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 10 01:31:41.516900 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 10 01:31:41.516918 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 10 01:31:41.516934 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 10 01:31:41.516956 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 10 01:31:41.516973 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 10 01:31:41.516990 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 10 01:31:41.517006 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 10 01:31:41.517024 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 10 01:31:41.517040 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 10 01:31:41.517057 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 10 01:31:41.517077 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 10 01:31:41.517103 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 10 01:31:41.517228 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 10 01:31:41.517249 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 10 01:31:41.517401 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 10 01:31:41.517419 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 10 01:31:41.517436 systemd[1]: Reached target slices.target - Slice Units.
Mar 10 01:31:41.517578 systemd[1]: Reached target swap.target - Swaps.
Mar 10 01:31:41.517596 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 10 01:31:41.517614 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 10 01:31:41.517637 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 10 01:31:41.517654 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 10 01:31:41.517670 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 10 01:31:41.517902 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 10 01:31:41.517920 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 10 01:31:41.517936 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 10 01:31:41.517952 systemd[1]: Mounting media.mount - External Media Directory...
Mar 10 01:31:41.517968 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 10 01:31:41.517985 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 10 01:31:41.518123 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 10 01:31:41.518142 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 10 01:31:41.518159 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 10 01:31:41.518175 systemd[1]: Reached target machines.target - Containers.
Mar 10 01:31:41.518194 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 10 01:31:41.518211 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 10 01:31:41.518230 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 10 01:31:41.518247 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 10 01:31:41.518610 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 10 01:31:41.518630 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 10 01:31:41.518646 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 10 01:31:41.518663 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 10 01:31:41.518680 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 10 01:31:41.518697 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 10 01:31:41.518714 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 10 01:31:41.518884 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 10 01:31:41.518901 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 10 01:31:41.518923 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 10 01:31:41.518939 kernel: fuse: init (API version 7.39)
Mar 10 01:31:41.518956 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 10 01:31:41.518972 kernel: loop: module loaded
Mar 10 01:31:41.518988 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 10 01:31:41.519005 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 10 01:31:41.519022 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 10 01:31:41.519037 kernel: ACPI: bus type drm_connector registered
Mar 10 01:31:41.519054 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 10 01:31:41.519079 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 10 01:31:41.519096 systemd[1]: Stopped verity-setup.service.
Mar 10 01:31:41.519155 systemd-journald[1142]: Collecting audit messages is disabled.
Mar 10 01:31:41.519191 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 10 01:31:41.519208 systemd-journald[1142]: Journal started
Mar 10 01:31:41.519235 systemd-journald[1142]: Runtime Journal (/run/log/journal/ab1db9eadda54148895bb12eb4fe08c9) is 6.0M, max 48.4M, 42.3M free.
Mar 10 01:31:40.448697 systemd[1]: Queued start job for default target multi-user.target.
Mar 10 01:31:40.502411 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 10 01:31:40.503807 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 10 01:31:40.504414 systemd[1]: systemd-journald.service: Consumed 2.364s CPU time.
Mar 10 01:31:41.584330 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 10 01:31:41.597924 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 10 01:31:41.648818 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 10 01:31:41.682763 systemd[1]: Mounted media.mount - External Media Directory.
Mar 10 01:31:41.705660 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 10 01:31:41.713977 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 10 01:31:41.729050 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 10 01:31:41.745180 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 10 01:31:41.755226 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 10 01:31:41.767937 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 10 01:31:41.768310 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 10 01:31:41.781510 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 10 01:31:41.784211 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 10 01:31:41.794135 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 10 01:31:41.794484 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 10 01:31:41.809839 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 10 01:31:41.810133 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 10 01:31:41.829946 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 10 01:31:41.830317 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 10 01:31:41.843861 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 10 01:31:41.844192 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 10 01:31:41.856398 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 10 01:31:41.877156 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 10 01:31:41.937027 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 10 01:31:42.090378 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 10 01:31:42.130095 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 10 01:31:42.154566 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 10 01:31:42.169129 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 10 01:31:42.171349 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 10 01:31:42.193412 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 10 01:31:42.259007 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 10 01:31:42.288751 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 10 01:31:42.298330 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 10 01:31:42.347177 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 10 01:31:42.372970 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 10 01:31:42.381369 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 10 01:31:42.400410 systemd-journald[1142]: Time spent on flushing to /var/log/journal/ab1db9eadda54148895bb12eb4fe08c9 is 66.495ms for 941 entries.
Mar 10 01:31:42.400410 systemd-journald[1142]: System Journal (/var/log/journal/ab1db9eadda54148895bb12eb4fe08c9) is 8.0M, max 195.6M, 187.6M free.
Mar 10 01:31:42.601312 systemd-journald[1142]: Received client request to flush runtime journal.
Mar 10 01:31:42.601391 kernel: loop0: detected capacity change from 0 to 140768
Mar 10 01:31:42.401766 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 10 01:31:42.473122 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 10 01:31:42.493247 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 10 01:31:42.504681 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 10 01:31:42.548656 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 10 01:31:42.567686 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 10 01:31:42.586728 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 10 01:31:42.594502 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 10 01:31:42.607115 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 10 01:31:42.614845 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 10 01:31:42.639616 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 10 01:31:42.658169 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 10 01:31:42.679628 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 10 01:31:42.709959 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 10 01:31:42.732913 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 10 01:31:42.739440 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 10 01:31:42.769923 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 10 01:31:42.787766 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 10 01:31:42.793594 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 10 01:31:42.794907 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 10 01:31:42.822045 udevadm[1186]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 10 01:31:42.850882 kernel: loop1: detected capacity change from 0 to 228704
Mar 10 01:31:42.893683 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
Mar 10 01:31:42.893713 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
Mar 10 01:31:42.914783 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 10 01:31:43.218611 kernel: loop2: detected capacity change from 0 to 142488
Mar 10 01:31:43.474939 kernel: loop3: detected capacity change from 0 to 140768
Mar 10 01:31:43.581123 kernel: loop4: detected capacity change from 0 to 228704
Mar 10 01:31:43.996511 kernel: loop5: detected capacity change from 0 to 142488
Mar 10 01:31:44.084096 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 10 01:31:44.088090 (sd-merge)[1200]: Merged extensions into '/usr'.
Mar 10 01:31:44.095946 systemd[1]: Reloading requested from client PID 1172 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 10 01:31:44.095983 systemd[1]: Reloading...
Mar 10 01:31:44.486616 zram_generator::config[1229]: No configuration found.
Mar 10 01:31:44.949964 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 10 01:31:45.203126 ldconfig[1167]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 10 01:31:45.255088 systemd[1]: Reloading finished in 1158 ms.
Mar 10 01:31:45.336177 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 10 01:31:45.341246 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 10 01:31:45.370961 systemd[1]: Starting ensure-sysext.service...
Mar 10 01:31:45.382966 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 10 01:31:45.401452 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 10 01:31:45.418803 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 10 01:31:45.454424 systemd[1]: Reloading requested from client PID 1263 ('systemctl') (unit ensure-sysext.service)...
Mar 10 01:31:45.456822 systemd[1]: Reloading...
Mar 10 01:31:46.099004 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 10 01:31:46.099884 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 10 01:31:46.102551 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 10 01:31:46.104461 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
Mar 10 01:31:46.104650 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
Mar 10 01:31:46.158566 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot.
Mar 10 01:31:46.158592 systemd-tmpfiles[1264]: Skipping /boot
Mar 10 01:31:46.658731 systemd-udevd[1266]: Using default interface naming scheme 'v255'.
Mar 10 01:31:46.667986 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot.
Mar 10 01:31:46.668428 systemd-tmpfiles[1264]: Skipping /boot
Mar 10 01:31:46.740579 zram_generator::config[1292]: No configuration found.
Mar 10 01:31:46.999365 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1332)
Mar 10 01:31:47.197200 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 10 01:31:47.205687 kernel: ACPI: button: Power Button [PWRF]
Mar 10 01:31:47.188089 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 10 01:31:47.267583 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 10 01:31:47.275130 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 10 01:31:47.275697 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 10 01:31:47.301562 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 10 01:31:47.350246 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 10 01:31:47.350981 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 10 01:31:47.361904 kernel: mousedev: PS/2 mouse device common for all mice
Mar 10 01:31:47.382034 systemd[1]: Reloading finished in 1426 ms.
Mar 10 01:31:47.470730 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 10 01:31:47.484857 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 10 01:31:47.652970 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 10 01:31:47.667906 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 10 01:31:47.684118 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 10 01:31:47.713221 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 10 01:31:47.743724 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 10 01:31:47.762433 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 10 01:31:47.799180 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 10 01:31:47.846992 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 10 01:31:48.036422 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 10 01:31:48.131182 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 10 01:31:48.172908 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 10 01:31:48.188843 augenrules[1383]: No rules
Mar 10 01:31:48.189419 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 10 01:31:48.205705 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 10 01:31:48.230727 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 10 01:31:48.241986 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 10 01:31:48.250772 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 10 01:31:48.258707 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 10 01:31:48.274747 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 10 01:31:48.279155 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 10 01:31:48.282165 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 10 01:31:48.289742 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 10 01:31:48.296804 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 10 01:31:48.297189 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 10 01:31:48.303656 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 10 01:31:48.304987 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 10 01:31:48.312048 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 10 01:31:48.312681 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 10 01:31:48.400694 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 10 01:31:48.518857 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 10 01:31:48.519136 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 10 01:31:48.519350 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 10 01:31:48.536154 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Mar 10 01:31:48.536512 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 10 01:31:48.636636 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 10 01:31:48.716076 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 10 01:31:49.087853 kernel: kvm_amd: TSC scaling supported Mar 10 01:31:49.088133 kernel: kvm_amd: Nested Virtualization enabled Mar 10 01:31:49.088167 kernel: kvm_amd: Nested Paging enabled Mar 10 01:31:49.088407 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 10 01:31:49.088561 kernel: kvm_amd: PMU virtualization is disabled Mar 10 01:31:49.248944 systemd-networkd[1371]: lo: Link UP Mar 10 01:31:49.250122 systemd-networkd[1371]: lo: Gained carrier Mar 10 01:31:49.257789 systemd-networkd[1371]: Enumeration completed Mar 10 01:31:49.269237 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 10 01:31:49.278968 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 10 01:31:49.279335 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 10 01:31:49.279459 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 10 01:31:49.279766 systemd-networkd[1371]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 10 01:31:49.279773 systemd-networkd[1371]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 10 01:31:49.281177 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Mar 10 01:31:49.291237 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 10 01:31:49.300829 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 10 01:31:49.334889 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 10 01:31:49.340513 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 10 01:31:49.340678 systemd-networkd[1371]: eth0: Link UP Mar 10 01:31:49.340686 systemd-networkd[1371]: eth0: Gained carrier Mar 10 01:31:49.340715 systemd-networkd[1371]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 10 01:31:49.341107 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 10 01:31:49.346412 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 10 01:31:49.346724 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 10 01:31:49.353095 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 10 01:31:49.356343 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 10 01:31:49.378105 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 10 01:31:49.378516 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 10 01:31:49.396548 systemd-resolved[1376]: Positive Trust Anchors: Mar 10 01:31:49.396599 systemd-resolved[1376]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 10 01:31:49.396665 systemd-resolved[1376]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 10 01:31:49.438350 systemd-resolved[1376]: Defaulting to hostname 'linux'. Mar 10 01:31:49.463764 systemd-networkd[1371]: eth0: DHCPv4 address 10.0.0.144/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 10 01:31:49.471778 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 10 01:31:49.484141 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 10 01:31:49.492797 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 10 01:31:49.501560 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 10 01:31:49.524943 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 10 01:31:49.548676 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 10 01:31:49.561839 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 10 01:31:49.563454 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Mar 10 01:31:49.564964 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 10 01:31:49.573433 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 10 01:31:49.588697 kernel: EDAC MC: Ver: 3.0.0 Mar 10 01:31:49.573812 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 10 01:31:49.588207 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 10 01:31:49.588567 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 10 01:31:49.595835 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 10 01:31:49.596525 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 10 01:31:49.612662 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 10 01:31:49.613061 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 10 01:31:49.637015 systemd[1]: Finished ensure-sysext.service. Mar 10 01:31:49.648855 systemd[1]: Reached target network.target - Network. Mar 10 01:31:49.654460 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 10 01:31:49.661606 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 10 01:31:49.661730 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 10 01:31:49.674878 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 10 01:31:49.684459 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 10 01:31:49.713649 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 10 01:31:49.898772 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 10 01:31:49.983650 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
Mar 10 01:31:49.995658 systemd[1]: Reached target time-set.target - System Time Set. Mar 10 01:31:51.130861 systemd-resolved[1376]: Clock change detected. Flushing caches. Mar 10 01:31:51.131085 systemd-timesyncd[1432]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 10 01:31:51.131158 systemd-timesyncd[1432]: Initial clock synchronization to Tue 2026-03-10 01:31:51.130678 UTC. Mar 10 01:31:51.151553 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 10 01:31:51.175986 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 10 01:31:51.181154 systemd[1]: Reached target sysinit.target - System Initialization. Mar 10 01:31:51.186191 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 10 01:31:51.192499 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 10 01:31:51.200029 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 10 01:31:51.214747 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 10 01:31:51.229891 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 10 01:31:51.237389 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 10 01:31:51.238101 systemd[1]: Reached target paths.target - Path Units. Mar 10 01:31:51.241326 systemd[1]: Reached target timers.target - Timer Units. Mar 10 01:31:51.246928 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 10 01:31:51.366661 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 10 01:31:51.381142 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 10 01:31:51.393117 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
Mar 10 01:31:51.404357 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 10 01:31:51.410855 systemd[1]: Reached target sockets.target - Socket Units. Mar 10 01:31:51.419875 systemd[1]: Reached target basic.target - Basic System. Mar 10 01:31:51.426948 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 10 01:31:51.427026 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 10 01:31:51.431015 systemd[1]: Starting containerd.service - containerd container runtime... Mar 10 01:31:51.445099 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 10 01:31:51.454198 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 10 01:31:51.526635 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 10 01:31:51.541814 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 10 01:31:51.566091 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 10 01:31:51.572831 systemd-networkd[1371]: eth0: Gained IPv6LL Mar 10 01:31:51.581487 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 10 01:31:51.594956 jq[1442]: false Mar 10 01:31:51.604761 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 10 01:31:51.613481 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 10 01:31:51.627685 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Mar 10 01:31:51.641558 extend-filesystems[1443]: Found loop3 Mar 10 01:31:51.641558 extend-filesystems[1443]: Found loop4 Mar 10 01:31:51.641558 extend-filesystems[1443]: Found loop5 Mar 10 01:31:51.641558 extend-filesystems[1443]: Found sr0 Mar 10 01:31:51.641558 extend-filesystems[1443]: Found vda Mar 10 01:31:51.641558 extend-filesystems[1443]: Found vda1 Mar 10 01:31:51.641558 extend-filesystems[1443]: Found vda2 Mar 10 01:31:51.641558 extend-filesystems[1443]: Found vda3 Mar 10 01:31:51.807424 extend-filesystems[1443]: Found usr Mar 10 01:31:51.807424 extend-filesystems[1443]: Found vda4 Mar 10 01:31:51.807424 extend-filesystems[1443]: Found vda6 Mar 10 01:31:51.807424 extend-filesystems[1443]: Found vda7 Mar 10 01:31:51.807424 extend-filesystems[1443]: Found vda9 Mar 10 01:31:51.807424 extend-filesystems[1443]: Checking size of /dev/vda9 Mar 10 01:31:51.675064 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 10 01:31:51.803973 dbus-daemon[1441]: [system] SELinux support is enabled Mar 10 01:31:51.897801 extend-filesystems[1443]: Resized partition /dev/vda9 Mar 10 01:31:51.693209 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 10 01:31:51.906046 extend-filesystems[1464]: resize2fs 1.47.1 (20-May-2024) Mar 10 01:31:52.002721 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 10 01:31:51.695021 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 10 01:31:52.043520 update_engine[1458]: I20260310 01:31:51.851970 1458 main.cc:92] Flatcar Update Engine starting Mar 10 01:31:52.043520 update_engine[1458]: I20260310 01:31:51.882771 1458 update_check_scheduler.cc:74] Next update check in 10m31s Mar 10 01:31:51.720389 systemd[1]: Starting update-engine.service - Update Engine... 
Mar 10 01:31:52.080194 jq[1459]: true Mar 10 01:31:51.807605 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 10 01:31:51.830331 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 10 01:31:51.847346 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 10 01:31:51.877659 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 10 01:31:51.920274 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 10 01:31:51.920807 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 10 01:31:51.921528 systemd[1]: motdgen.service: Deactivated successfully. Mar 10 01:31:51.921888 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 10 01:31:51.938687 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 10 01:31:51.939038 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 10 01:31:51.941674 systemd-logind[1452]: Watching system buttons on /dev/input/event1 (Power Button) Mar 10 01:31:51.941707 systemd-logind[1452]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 10 01:31:51.948332 systemd-logind[1452]: New seat seat0. Mar 10 01:31:51.974891 systemd[1]: Started systemd-logind.service - User Login Management. Mar 10 01:31:52.090145 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1298) Mar 10 01:31:52.176318 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 10 01:31:52.125765 systemd[1]: Reached target network-online.target - Network is Online. Mar 10 01:31:52.205684 jq[1468]: true Mar 10 01:31:52.178680 dbus-daemon[1441]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 10 01:31:52.200120 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... 
Mar 10 01:31:52.208504 extend-filesystems[1464]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 10 01:31:52.208504 extend-filesystems[1464]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 10 01:31:52.208504 extend-filesystems[1464]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 10 01:31:52.236164 extend-filesystems[1443]: Resized filesystem in /dev/vda9 Mar 10 01:31:52.210594 (ntainerd)[1475]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 10 01:31:52.225568 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:31:52.263563 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 10 01:31:52.270421 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 10 01:31:52.270734 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 10 01:31:52.275119 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 10 01:31:52.275149 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 10 01:31:52.281177 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 10 01:31:52.281772 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 10 01:31:52.343772 tar[1467]: linux-amd64/LICENSE Mar 10 01:31:52.343772 tar[1467]: linux-amd64/helm Mar 10 01:31:52.374627 systemd[1]: Started update-engine.service - Update Engine. Mar 10 01:31:52.405113 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 10 01:31:52.781351 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Mar 10 01:31:52.900523 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 10 01:31:52.901057 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 10 01:31:52.921039 bash[1512]: Updated "/home/core/.ssh/authorized_keys" Mar 10 01:31:52.922562 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 10 01:31:52.967305 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 10 01:31:52.975670 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 10 01:31:53.020191 sshd_keygen[1461]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 10 01:31:53.233819 locksmithd[1500]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 10 01:31:53.277509 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 10 01:31:53.478547 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 10 01:31:53.520728 systemd[1]: issuegen.service: Deactivated successfully. Mar 10 01:31:53.522766 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 10 01:31:53.556605 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 10 01:31:54.013550 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 10 01:31:54.101916 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 10 01:31:54.116036 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 10 01:31:54.123409 systemd[1]: Reached target getty.target - Login Prompts. Mar 10 01:31:54.731960 containerd[1475]: time="2026-03-10T01:31:54.731185627Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 10 01:31:54.803537 containerd[1475]: time="2026-03-10T01:31:54.802880042Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Mar 10 01:31:54.824752 containerd[1475]: time="2026-03-10T01:31:54.822930714Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 10 01:31:54.824752 containerd[1475]: time="2026-03-10T01:31:54.823160803Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 10 01:31:54.824752 containerd[1475]: time="2026-03-10T01:31:54.823332535Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 10 01:31:54.824752 containerd[1475]: time="2026-03-10T01:31:54.824015790Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 10 01:31:54.824752 containerd[1475]: time="2026-03-10T01:31:54.824045266Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 10 01:31:54.824752 containerd[1475]: time="2026-03-10T01:31:54.824476811Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 10 01:31:54.824752 containerd[1475]: time="2026-03-10T01:31:54.824505424Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 10 01:31:54.826176 containerd[1475]: time="2026-03-10T01:31:54.826144275Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 10 01:31:54.826315 containerd[1475]: time="2026-03-10T01:31:54.826295537Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 10 01:31:54.826386 containerd[1475]: time="2026-03-10T01:31:54.826368032Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 10 01:31:54.826508 containerd[1475]: time="2026-03-10T01:31:54.826483688Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 10 01:31:54.826944 containerd[1475]: time="2026-03-10T01:31:54.826916817Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 10 01:31:54.827901 containerd[1475]: time="2026-03-10T01:31:54.827873614Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 10 01:31:54.828283 containerd[1475]: time="2026-03-10T01:31:54.828192940Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 10 01:31:54.828379 containerd[1475]: time="2026-03-10T01:31:54.828355925Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 10 01:31:54.828604 containerd[1475]: time="2026-03-10T01:31:54.828581966Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Mar 10 01:31:54.828719 containerd[1475]: time="2026-03-10T01:31:54.828703123Z" level=info msg="metadata content store policy set" policy=shared Mar 10 01:31:54.900378 containerd[1475]: time="2026-03-10T01:31:54.899684846Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 10 01:31:54.900378 containerd[1475]: time="2026-03-10T01:31:54.900543680Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 10 01:31:54.900378 containerd[1475]: time="2026-03-10T01:31:54.900621536Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 10 01:31:54.900378 containerd[1475]: time="2026-03-10T01:31:54.900804477Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 10 01:31:54.900378 containerd[1475]: time="2026-03-10T01:31:54.900915895Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 10 01:31:54.903173 containerd[1475]: time="2026-03-10T01:31:54.902072365Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 10 01:31:54.906138 containerd[1475]: time="2026-03-10T01:31:54.904836195Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 10 01:31:54.906138 containerd[1475]: time="2026-03-10T01:31:54.905110287Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 10 01:31:54.906138 containerd[1475]: time="2026-03-10T01:31:54.905145673Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 10 01:31:54.906138 containerd[1475]: time="2026-03-10T01:31:54.905175729Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Mar 10 01:31:54.906138 containerd[1475]: time="2026-03-10T01:31:54.905205295Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 10 01:31:54.906138 containerd[1475]: time="2026-03-10T01:31:54.905303568Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 10 01:31:54.906138 containerd[1475]: time="2026-03-10T01:31:54.905331380Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 10 01:31:54.906138 containerd[1475]: time="2026-03-10T01:31:54.905356386Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 10 01:31:54.906138 containerd[1475]: time="2026-03-10T01:31:54.905387404Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 10 01:31:54.906138 containerd[1475]: time="2026-03-10T01:31:54.905411028Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 10 01:31:54.906138 containerd[1475]: time="2026-03-10T01:31:54.905474377Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 10 01:31:54.906138 containerd[1475]: time="2026-03-10T01:31:54.905502709Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 10 01:31:54.906138 containerd[1475]: time="2026-03-10T01:31:54.906126024Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 10 01:31:54.906671 containerd[1475]: time="2026-03-10T01:31:54.906164185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1
Mar 10 01:31:54.906671 containerd[1475]: time="2026-03-10T01:31:54.906184202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 10 01:31:54.906671 containerd[1475]: time="2026-03-10T01:31:54.906282706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 10 01:31:54.906671 containerd[1475]: time="2026-03-10T01:31:54.906342648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 10 01:31:54.906671 containerd[1475]: time="2026-03-10T01:31:54.906368036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 10 01:31:54.906671 containerd[1475]: time="2026-03-10T01:31:54.906389245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 10 01:31:54.906671 containerd[1475]: time="2026-03-10T01:31:54.906412549Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 10 01:31:54.906671 containerd[1475]: time="2026-03-10T01:31:54.906497447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 10 01:31:54.906671 containerd[1475]: time="2026-03-10T01:31:54.906529597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 10 01:31:54.906671 containerd[1475]: time="2026-03-10T01:31:54.906547090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 10 01:31:54.906671 containerd[1475]: time="2026-03-10T01:31:54.906567568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 10 01:31:54.906671 containerd[1475]: time="2026-03-10T01:31:54.906587665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 10 01:31:54.906671 containerd[1475]: time="2026-03-10T01:31:54.906612612Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 10 01:31:54.906671 containerd[1475]: time="2026-03-10T01:31:54.906644812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 10 01:31:54.906671 containerd[1475]: time="2026-03-10T01:31:54.906664539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 10 01:31:54.907100 containerd[1475]: time="2026-03-10T01:31:54.906729982Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 10 01:31:54.907100 containerd[1475]: time="2026-03-10T01:31:54.906891523Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 10 01:31:54.907100 containerd[1475]: time="2026-03-10T01:31:54.907056481Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 10 01:31:54.907100 containerd[1475]: time="2026-03-10T01:31:54.907076308Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 10 01:31:54.907100 containerd[1475]: time="2026-03-10T01:31:54.907098470Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 10 01:31:54.907430 containerd[1475]: time="2026-03-10T01:31:54.907116463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 10 01:31:54.907430 containerd[1475]: time="2026-03-10T01:31:54.907133936Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 10 01:31:54.907430 containerd[1475]: time="2026-03-10T01:31:54.907172849Z" level=info msg="NRI interface is disabled by configuration."
Mar 10 01:31:54.907430 containerd[1475]: time="2026-03-10T01:31:54.907192054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 10 01:31:54.909049 containerd[1475]: time="2026-03-10T01:31:54.908094019Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 10 01:31:54.909049 containerd[1475]: time="2026-03-10T01:31:54.908270538Z" level=info msg="Connect containerd service"
Mar 10 01:31:54.909049 containerd[1475]: time="2026-03-10T01:31:54.908379953Z" level=info msg="using legacy CRI server"
Mar 10 01:31:54.909049 containerd[1475]: time="2026-03-10T01:31:54.908393868Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 10 01:31:54.910422 containerd[1475]: time="2026-03-10T01:31:54.909813249Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 10 01:31:54.913539 containerd[1475]: time="2026-03-10T01:31:54.911207362Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 10 01:31:54.913539 containerd[1475]: time="2026-03-10T01:31:54.911853429Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 10 01:31:54.913539 containerd[1475]: time="2026-03-10T01:31:54.911933488Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 10 01:31:54.913539 containerd[1475]: time="2026-03-10T01:31:54.912036119Z" level=info msg="Start subscribing containerd event"
Mar 10 01:31:54.913539 containerd[1475]: time="2026-03-10T01:31:54.912123173Z" level=info msg="Start recovering state"
Mar 10 01:31:54.913539 containerd[1475]: time="2026-03-10T01:31:54.912651188Z" level=info msg="Start event monitor"
Mar 10 01:31:54.913539 containerd[1475]: time="2026-03-10T01:31:54.912694048Z" level=info msg="Start snapshots syncer"
Mar 10 01:31:54.913539 containerd[1475]: time="2026-03-10T01:31:54.912829761Z" level=info msg="Start cni network conf syncer for default"
Mar 10 01:31:54.913539 containerd[1475]: time="2026-03-10T01:31:54.912850531Z" level=info msg="Start streaming server"
Mar 10 01:31:54.920020 containerd[1475]: time="2026-03-10T01:31:54.913920088Z" level=info msg="containerd successfully booted in 0.198482s"
Mar 10 01:31:54.913675 systemd[1]: Started containerd.service - containerd container runtime.
Mar 10 01:31:56.033068 tar[1467]: linux-amd64/README.md
Mar 10 01:31:56.067753 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 10 01:31:58.610896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:31:58.631049 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 10 01:31:58.650954 systemd[1]: Startup finished in 5.585s (kernel) + 18.950s (initrd) + 19.501s (userspace) = 44.037s.
Mar 10 01:31:58.783060 (kubelet)[1553]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 10 01:32:01.111997 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 10 01:32:01.148131 systemd[1]: Started sshd@0-10.0.0.144:22-10.0.0.1:44482.service - OpenSSH per-connection server daemon (10.0.0.1:44482).
Mar 10 01:32:01.746159 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 44482 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:32:01.896088 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:32:01.932325 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 10 01:32:01.943721 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 10 01:32:01.951356 systemd-logind[1452]: New session 1 of user core.
Mar 10 01:32:02.048051 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 10 01:32:02.075799 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 10 01:32:02.093887 (systemd)[1570]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 10 01:32:02.978076 systemd[1570]: Queued start job for default target default.target.
Mar 10 01:32:02.999199 systemd[1570]: Created slice app.slice - User Application Slice.
Mar 10 01:32:02.999310 systemd[1570]: Reached target paths.target - Paths.
Mar 10 01:32:02.999330 systemd[1570]: Reached target timers.target - Timers.
Mar 10 01:32:03.025932 systemd[1570]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 10 01:32:03.125947 systemd[1570]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 10 01:32:03.126174 systemd[1570]: Reached target sockets.target - Sockets.
Mar 10 01:32:03.126195 systemd[1570]: Reached target basic.target - Basic System.
Mar 10 01:32:03.126364 systemd[1570]: Reached target default.target - Main User Target.
Mar 10 01:32:03.126430 systemd[1570]: Startup finished in 836ms.
Mar 10 01:32:03.126938 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 10 01:32:03.164915 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 10 01:32:03.216993 kubelet[1553]: E0310 01:32:03.216826 1553 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 10 01:32:03.222966 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 01:32:03.223398 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 10 01:32:03.225857 systemd[1]: kubelet.service: Consumed 6.191s CPU time.
Mar 10 01:32:03.276738 systemd[1]: Started sshd@1-10.0.0.144:22-10.0.0.1:57338.service - OpenSSH per-connection server daemon (10.0.0.1:57338).
Mar 10 01:32:03.570132 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 57338 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:32:03.578061 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:32:03.597756 systemd-logind[1452]: New session 2 of user core.
Mar 10 01:32:03.617568 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 10 01:32:03.807802 sshd[1584]: pam_unix(sshd:session): session closed for user core
Mar 10 01:32:03.845733 systemd[1]: sshd@1-10.0.0.144:22-10.0.0.1:57338.service: Deactivated successfully.
Mar 10 01:32:03.867441 systemd[1]: session-2.scope: Deactivated successfully.
Mar 10 01:32:03.882395 systemd-logind[1452]: Session 2 logged out. Waiting for processes to exit.
Mar 10 01:32:03.896442 systemd[1]: Started sshd@2-10.0.0.144:22-10.0.0.1:57354.service - OpenSSH per-connection server daemon (10.0.0.1:57354).
Mar 10 01:32:03.900373 systemd-logind[1452]: Removed session 2.
Mar 10 01:32:04.282381 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 57354 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:32:04.285983 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:32:04.476689 systemd-logind[1452]: New session 3 of user core.
Mar 10 01:32:04.496826 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 10 01:32:04.723037 sshd[1591]: pam_unix(sshd:session): session closed for user core
Mar 10 01:32:04.776878 systemd[1]: Started sshd@3-10.0.0.144:22-10.0.0.1:57364.service - OpenSSH per-connection server daemon (10.0.0.1:57364).
Mar 10 01:32:04.782599 systemd[1]: sshd@2-10.0.0.144:22-10.0.0.1:57354.service: Deactivated successfully.
Mar 10 01:32:04.792024 systemd[1]: session-3.scope: Deactivated successfully.
Mar 10 01:32:04.800950 systemd-logind[1452]: Session 3 logged out. Waiting for processes to exit.
Mar 10 01:32:04.814014 systemd-logind[1452]: Removed session 3.
Mar 10 01:32:04.891711 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 57364 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:32:04.895614 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:32:04.924604 systemd-logind[1452]: New session 4 of user core.
Mar 10 01:32:04.939501 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 10 01:32:05.176342 sshd[1596]: pam_unix(sshd:session): session closed for user core
Mar 10 01:32:05.196518 systemd[1]: sshd@3-10.0.0.144:22-10.0.0.1:57364.service: Deactivated successfully.
Mar 10 01:32:05.201666 systemd[1]: session-4.scope: Deactivated successfully.
Mar 10 01:32:05.208636 systemd-logind[1452]: Session 4 logged out. Waiting for processes to exit.
Mar 10 01:32:05.219562 systemd[1]: Started sshd@4-10.0.0.144:22-10.0.0.1:57378.service - OpenSSH per-connection server daemon (10.0.0.1:57378).
Mar 10 01:32:05.235832 systemd-logind[1452]: Removed session 4.
Mar 10 01:32:05.492185 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 57378 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:32:05.506122 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:32:05.547842 systemd-logind[1452]: New session 5 of user core.
Mar 10 01:32:05.570540 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 10 01:32:05.793330 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 10 01:32:05.793924 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 10 01:32:05.843437 sudo[1608]: pam_unix(sudo:session): session closed for user root
Mar 10 01:32:05.861603 sshd[1605]: pam_unix(sshd:session): session closed for user core
Mar 10 01:32:05.886154 systemd[1]: Started sshd@5-10.0.0.144:22-10.0.0.1:57382.service - OpenSSH per-connection server daemon (10.0.0.1:57382).
Mar 10 01:32:05.893376 systemd[1]: sshd@4-10.0.0.144:22-10.0.0.1:57378.service: Deactivated successfully.
Mar 10 01:32:05.897654 systemd[1]: session-5.scope: Deactivated successfully.
Mar 10 01:32:05.952073 systemd-logind[1452]: Session 5 logged out. Waiting for processes to exit.
Mar 10 01:32:05.962838 systemd-logind[1452]: Removed session 5.
Mar 10 01:32:06.045533 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 57382 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:32:06.061372 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:32:06.085358 systemd-logind[1452]: New session 6 of user core.
Mar 10 01:32:06.095579 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 10 01:32:06.225084 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 10 01:32:06.230155 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 10 01:32:06.264951 sudo[1617]: pam_unix(sudo:session): session closed for user root
Mar 10 01:32:06.290704 sudo[1616]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Mar 10 01:32:06.291531 sudo[1616]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 10 01:32:06.384690 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Mar 10 01:32:06.390781 auditctl[1620]: No rules
Mar 10 01:32:06.392032 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 10 01:32:06.392671 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Mar 10 01:32:06.406654 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 10 01:32:06.628964 augenrules[1638]: No rules
Mar 10 01:32:06.638286 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 10 01:32:06.657429 sudo[1616]: pam_unix(sudo:session): session closed for user root
Mar 10 01:32:06.668504 sshd[1611]: pam_unix(sshd:session): session closed for user core
Mar 10 01:32:06.686025 systemd[1]: sshd@5-10.0.0.144:22-10.0.0.1:57382.service: Deactivated successfully.
Mar 10 01:32:06.694420 systemd[1]: session-6.scope: Deactivated successfully.
Mar 10 01:32:06.711655 systemd-logind[1452]: Session 6 logged out. Waiting for processes to exit.
Mar 10 01:32:06.739578 systemd[1]: Started sshd@6-10.0.0.144:22-10.0.0.1:57384.service - OpenSSH per-connection server daemon (10.0.0.1:57384).
Mar 10 01:32:06.746038 systemd-logind[1452]: Removed session 6.
Mar 10 01:32:07.049888 sshd[1646]: Accepted publickey for core from 10.0.0.1 port 57384 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:32:07.065777 sshd[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:32:07.101638 systemd-logind[1452]: New session 7 of user core.
Mar 10 01:32:07.112706 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 10 01:32:07.200303 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 10 01:32:07.201347 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 10 01:32:10.928993 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 10 01:32:10.959750 (dockerd)[1667]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 10 01:32:13.404587 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 10 01:32:13.430889 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:32:13.771298 dockerd[1667]: time="2026-03-10T01:32:13.770830599Z" level=info msg="Starting up"
Mar 10 01:32:14.409022 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:32:14.410744 (kubelet)[1698]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 10 01:32:14.431657 systemd[1]: var-lib-docker-metacopy\x2dcheck2635305900-merged.mount: Deactivated successfully.
Mar 10 01:32:14.516721 dockerd[1667]: time="2026-03-10T01:32:14.515885302Z" level=info msg="Loading containers: start."
Mar 10 01:32:14.765407 kubelet[1698]: E0310 01:32:14.764969 1698 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 10 01:32:14.790639 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 01:32:14.790946 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 10 01:32:15.257107 kernel: Initializing XFRM netlink socket
Mar 10 01:32:15.645798 systemd-networkd[1371]: docker0: Link UP
Mar 10 01:32:15.833352 dockerd[1667]: time="2026-03-10T01:32:15.832142418Z" level=info msg="Loading containers: done."
Mar 10 01:32:16.040186 dockerd[1667]: time="2026-03-10T01:32:16.039935803Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 10 01:32:16.041814 dockerd[1667]: time="2026-03-10T01:32:16.040368802Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 10 01:32:16.041814 dockerd[1667]: time="2026-03-10T01:32:16.040641300Z" level=info msg="Daemon has completed initialization"
Mar 10 01:32:16.177345 dockerd[1667]: time="2026-03-10T01:32:16.176511700Z" level=info msg="API listen on /run/docker.sock"
Mar 10 01:32:16.176976 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 10 01:32:18.240148 containerd[1475]: time="2026-03-10T01:32:18.231773342Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\""
Mar 10 01:32:19.760949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2954612393.mount: Deactivated successfully.
Mar 10 01:32:24.897150 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 10 01:32:24.915618 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:32:25.383474 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:32:25.384071 (kubelet)[1894]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 10 01:32:25.518312 kubelet[1894]: E0310 01:32:25.518173 1894 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 10 01:32:25.527157 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 01:32:25.528635 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 10 01:32:30.058439 containerd[1475]: time="2026-03-10T01:32:30.056530635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:32:30.062879 containerd[1475]: time="2026-03-10T01:32:30.059365219Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116186"
Mar 10 01:32:30.062879 containerd[1475]: time="2026-03-10T01:32:30.062827001Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:32:30.072726 containerd[1475]: time="2026-03-10T01:32:30.072499363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:32:30.076668 containerd[1475]: time="2026-03-10T01:32:30.074752017Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 11.842901913s"
Mar 10 01:32:30.076668 containerd[1475]: time="2026-03-10T01:32:30.074817728Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\""
Mar 10 01:32:30.084971 containerd[1475]: time="2026-03-10T01:32:30.084375872Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\""
Mar 10 01:32:35.824556 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 10 01:32:35.920584 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:32:36.446394 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:32:36.477358 (kubelet)[1915]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 10 01:32:37.060467 kubelet[1915]: E0310 01:32:37.059403 1915 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 10 01:32:37.066897 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 01:32:37.068346 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 10 01:32:37.646728 update_engine[1458]: I20260310 01:32:37.646070 1458 update_attempter.cc:509] Updating boot flags...
Mar 10 01:32:38.318386 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1932)
Mar 10 01:32:38.785292 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1935)
Mar 10 01:32:40.482981 containerd[1475]: time="2026-03-10T01:32:40.481988560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:32:40.486618 containerd[1475]: time="2026-03-10T01:32:40.484594739Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021810"
Mar 10 01:32:40.487798 containerd[1475]: time="2026-03-10T01:32:40.487648269Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:32:40.493944 containerd[1475]: time="2026-03-10T01:32:40.493778914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:32:40.496607 containerd[1475]: time="2026-03-10T01:32:40.496497688Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 10.412077825s"
Mar 10 01:32:40.496607 containerd[1475]: time="2026-03-10T01:32:40.496595910Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\""
Mar 10 01:32:40.502955 containerd[1475]: time="2026-03-10T01:32:40.502574705Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\""
Mar 10 01:32:47.210327 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 10 01:32:47.276366 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:32:48.322145 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:32:48.348929 (kubelet)[1950]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 10 01:32:48.903701 kubelet[1950]: E0310 01:32:48.901716 1950 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 10 01:32:48.910172 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 01:32:48.910893 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 10 01:32:48.914294 systemd[1]: kubelet.service: Consumed 1.177s CPU time.
Mar 10 01:32:49.343118 containerd[1475]: time="2026-03-10T01:32:49.331817054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:32:49.343118 containerd[1475]: time="2026-03-10T01:32:49.339463874Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162746"
Mar 10 01:32:49.343118 containerd[1475]: time="2026-03-10T01:32:49.342391111Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:32:49.365262 containerd[1475]: time="2026-03-10T01:32:49.362451904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:32:49.367853 containerd[1475]: time="2026-03-10T01:32:49.366479537Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 8.863855139s"
Mar 10 01:32:49.367853 containerd[1475]: time="2026-03-10T01:32:49.367834562Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\""
Mar 10 01:32:49.389516 containerd[1475]: time="2026-03-10T01:32:49.386833484Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\""
Mar 10 01:32:53.684449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1514619139.mount: Deactivated successfully.
Mar 10 01:32:56.935468 containerd[1475]: time="2026-03-10T01:32:56.934720770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:32:56.935468 containerd[1475]: time="2026-03-10T01:32:56.935303984Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828647"
Mar 10 01:32:56.945012 containerd[1475]: time="2026-03-10T01:32:56.940041845Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:32:56.945652 containerd[1475]: time="2026-03-10T01:32:56.945504418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:32:56.956900 containerd[1475]: time="2026-03-10T01:32:56.947782002Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 7.560897161s"
Mar 10 01:32:56.956900 containerd[1475]: time="2026-03-10T01:32:56.947852042Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\""
Mar 10 01:32:56.979885 containerd[1475]: time="2026-03-10T01:32:56.972661553Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Mar 10 01:32:58.089489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount239165601.mount: Deactivated successfully.
Mar 10 01:32:59.445513 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 10 01:32:59.501829 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:33:00.285560 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:33:00.297583 (kubelet)[1993]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 10 01:33:00.600804 kubelet[1993]: E0310 01:33:00.599417 1993 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 10 01:33:00.612838 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 01:33:00.613386 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 10 01:33:06.409678 containerd[1475]: time="2026-03-10T01:33:06.403283987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:33:06.431620 containerd[1475]: time="2026-03-10T01:33:06.431421562Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Mar 10 01:33:06.440454 containerd[1475]: time="2026-03-10T01:33:06.438957089Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:33:06.463023 containerd[1475]: time="2026-03-10T01:33:06.462757469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:33:06.472300 containerd[1475]: time="2026-03-10T01:33:06.467376419Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 9.485505823s"
Mar 10 01:33:06.472300 containerd[1475]: time="2026-03-10T01:33:06.467476906Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Mar 10 01:33:06.482576 containerd[1475]: time="2026-03-10T01:33:06.479704191Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 10 01:33:07.168999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2500279802.mount: Deactivated successfully.
Mar 10 01:33:07.193150 containerd[1475]: time="2026-03-10T01:33:07.192141876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:33:07.198681 containerd[1475]: time="2026-03-10T01:33:07.197579805Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 10 01:33:07.202662 containerd[1475]: time="2026-03-10T01:33:07.199793875Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:33:07.207614 containerd[1475]: time="2026-03-10T01:33:07.207520711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:33:07.208851 containerd[1475]: time="2026-03-10T01:33:07.208642902Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 728.881785ms" Mar 10 01:33:07.208851 containerd[1475]: time="2026-03-10T01:33:07.208714405Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 10 01:33:07.212852 containerd[1475]: time="2026-03-10T01:33:07.211664076Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Mar 10 01:33:07.993386 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1016497493.mount: Deactivated successfully. Mar 10 01:33:10.640489 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. 
Mar 10 01:33:10.669538 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:33:11.033390 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 01:33:11.042012 (kubelet)[2101]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 10 01:33:11.132669 kubelet[2101]: E0310 01:33:11.132521 2101 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 10 01:33:11.138554 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 10 01:33:11.138828 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 10 01:33:11.523745 containerd[1475]: time="2026-03-10T01:33:11.521638751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:33:11.529840 containerd[1475]: time="2026-03-10T01:33:11.528685001Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718840" Mar 10 01:33:11.534057 containerd[1475]: time="2026-03-10T01:33:11.533994313Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:33:11.541720 containerd[1475]: time="2026-03-10T01:33:11.541605651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:33:11.544152 containerd[1475]: time="2026-03-10T01:33:11.544036179Z" level=info msg="Pulled image 
\"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 4.332298285s" Mar 10 01:33:11.544152 containerd[1475]: time="2026-03-10T01:33:11.544132900Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Mar 10 01:33:15.684001 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 01:33:15.703033 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:33:15.783742 systemd[1]: Reloading requested from client PID 2154 ('systemctl') (unit session-7.scope)... Mar 10 01:33:15.784040 systemd[1]: Reloading... Mar 10 01:33:15.995040 zram_generator::config[2196]: No configuration found. Mar 10 01:33:16.202971 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 10 01:33:16.325561 systemd[1]: Reloading finished in 540 ms. Mar 10 01:33:16.437711 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:33:16.442530 systemd[1]: kubelet.service: Deactivated successfully. Mar 10 01:33:16.443409 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 01:33:16.455747 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:33:19.182530 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 10 01:33:19.184008 (kubelet)[2243]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 10 01:33:19.701856 kubelet[2243]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 10 01:33:19.701856 kubelet[2243]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 10 01:33:19.701856 kubelet[2243]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 10 01:33:19.705138 kubelet[2243]: I0310 01:33:19.701104 2243 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 10 01:33:20.910067 kubelet[2243]: I0310 01:33:20.908461 2243 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 10 01:33:20.910067 kubelet[2243]: I0310 01:33:20.908556 2243 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 10 01:33:20.910067 kubelet[2243]: I0310 01:33:20.908978 2243 server.go:956] "Client rotation is on, will bootstrap in background" Mar 10 01:33:21.015856 kubelet[2243]: E0310 01:33:21.015799 2243 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.144:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 10 01:33:21.034021 kubelet[2243]: I0310 01:33:21.033600 2243 
dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 10 01:33:21.082279 kubelet[2243]: E0310 01:33:21.081436 2243 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 10 01:33:21.082279 kubelet[2243]: I0310 01:33:21.081475 2243 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 10 01:33:21.141341 kubelet[2243]: I0310 01:33:21.140953 2243 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 10 01:33:21.147199 kubelet[2243]: I0310 01:33:21.147064 2243 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 10 01:33:21.147624 kubelet[2243]: I0310 01:33:21.147156 2243 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 10 01:33:21.147624 kubelet[2243]: I0310 01:33:21.147614 2243 topology_manager.go:138] "Creating topology manager with none policy" Mar 10 01:33:21.150287 kubelet[2243]: I0310 01:33:21.147634 2243 container_manager_linux.go:303] "Creating device plugin manager" Mar 10 01:33:21.150287 kubelet[2243]: I0310 01:33:21.148424 2243 state_mem.go:36] "Initialized new in-memory state store" Mar 10 01:33:21.168863 kubelet[2243]: I0310 01:33:21.167760 2243 kubelet.go:480] "Attempting to sync node with API 
server" Mar 10 01:33:21.168863 kubelet[2243]: I0310 01:33:21.167834 2243 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 10 01:33:21.168863 kubelet[2243]: I0310 01:33:21.167971 2243 kubelet.go:386] "Adding apiserver pod source" Mar 10 01:33:21.168863 kubelet[2243]: I0310 01:33:21.168018 2243 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 10 01:33:21.188314 kubelet[2243]: I0310 01:33:21.183980 2243 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 10 01:33:21.188314 kubelet[2243]: I0310 01:33:21.184759 2243 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 10 01:33:21.188314 kubelet[2243]: E0310 01:33:21.186489 2243 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 10 01:33:21.188314 kubelet[2243]: E0310 01:33:21.186884 2243 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.144:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 10 01:33:21.188314 kubelet[2243]: W0310 01:33:21.187760 2243 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 10 01:33:21.207960 kubelet[2243]: I0310 01:33:21.207432 2243 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 10 01:33:21.208819 kubelet[2243]: I0310 01:33:21.208793 2243 server.go:1289] "Started kubelet" Mar 10 01:33:21.210422 kubelet[2243]: I0310 01:33:21.210299 2243 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 10 01:33:21.213269 kubelet[2243]: I0310 01:33:21.212928 2243 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 10 01:33:21.213269 kubelet[2243]: I0310 01:33:21.213004 2243 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 10 01:33:21.216717 kubelet[2243]: I0310 01:33:21.216548 2243 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 10 01:33:21.224418 kubelet[2243]: E0310 01:33:21.220506 2243 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.144:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.144:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189b56e0a8f18ad0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-10 01:33:21.207487184 +0000 UTC m=+1.991478215,LastTimestamp:2026-03-10 01:33:21.207487184 +0000 UTC m=+1.991478215,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 10 01:33:21.227951 kubelet[2243]: I0310 01:33:21.227821 2243 server.go:317] "Adding debug handlers to kubelet server" Mar 10 01:33:21.229443 kubelet[2243]: I0310 01:33:21.229415 2243 factory.go:223] Registration of the systemd container factory successfully Mar 10 01:33:21.229674 kubelet[2243]: 
I0310 01:33:21.229646 2243 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 10 01:33:21.233034 kubelet[2243]: I0310 01:33:21.232840 2243 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 10 01:33:21.233034 kubelet[2243]: E0310 01:33:21.233139 2243 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:33:21.233034 kubelet[2243]: I0310 01:33:21.233456 2243 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 10 01:33:21.236422 kubelet[2243]: I0310 01:33:21.233583 2243 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 10 01:33:21.236422 kubelet[2243]: I0310 01:33:21.233653 2243 reconciler.go:26] "Reconciler: start to sync state" Mar 10 01:33:21.236422 kubelet[2243]: E0310 01:33:21.234088 2243 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.144:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 10 01:33:21.236422 kubelet[2243]: E0310 01:33:21.234209 2243 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="200ms" Mar 10 01:33:21.236847 kubelet[2243]: E0310 01:33:21.236710 2243 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 10 01:33:21.239997 kubelet[2243]: I0310 01:33:21.239963 2243 factory.go:223] Registration of the containerd container factory successfully Mar 10 01:33:21.294848 kubelet[2243]: I0310 01:33:21.294724 2243 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 10 01:33:21.294848 kubelet[2243]: I0310 01:33:21.294772 2243 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 10 01:33:21.294848 kubelet[2243]: I0310 01:33:21.294799 2243 state_mem.go:36] "Initialized new in-memory state store" Mar 10 01:33:21.309164 kubelet[2243]: I0310 01:33:21.308659 2243 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 10 01:33:21.312421 kubelet[2243]: I0310 01:33:21.311824 2243 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 10 01:33:21.312421 kubelet[2243]: I0310 01:33:21.311933 2243 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 10 01:33:21.312421 kubelet[2243]: I0310 01:33:21.311976 2243 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 10 01:33:21.312421 kubelet[2243]: I0310 01:33:21.312018 2243 kubelet.go:2436] "Starting kubelet main sync loop" Mar 10 01:33:21.312421 kubelet[2243]: E0310 01:33:21.312094 2243 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 10 01:33:21.315631 kubelet[2243]: E0310 01:33:21.315548 2243 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.144:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 10 01:33:21.336451 kubelet[2243]: E0310 01:33:21.336271 2243 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:33:21.413289 kubelet[2243]: E0310 01:33:21.412946 2243 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 10 01:33:21.437317 kubelet[2243]: E0310 01:33:21.436550 2243 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:33:21.437487 kubelet[2243]: E0310 01:33:21.436060 2243 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="400ms" Mar 10 01:33:21.444092 kubelet[2243]: I0310 01:33:21.443984 2243 policy_none.go:49] "None policy: Start" Mar 10 01:33:21.444092 kubelet[2243]: I0310 01:33:21.444079 2243 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 10 01:33:21.444177 kubelet[2243]: I0310 01:33:21.444105 2243 state_mem.go:35] "Initializing new in-memory state store" Mar 10 01:33:21.467830 systemd[1]: Created slice 
kubepods.slice - libcontainer container kubepods.slice. Mar 10 01:33:21.502206 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 10 01:33:21.512336 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 10 01:33:21.532651 kubelet[2243]: E0310 01:33:21.529384 2243 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 10 01:33:21.532651 kubelet[2243]: I0310 01:33:21.529766 2243 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 10 01:33:21.532651 kubelet[2243]: I0310 01:33:21.529783 2243 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 10 01:33:21.532651 kubelet[2243]: I0310 01:33:21.530940 2243 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 10 01:33:21.537801 kubelet[2243]: E0310 01:33:21.537681 2243 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 10 01:33:21.537801 kubelet[2243]: E0310 01:33:21.537765 2243 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 10 01:33:21.640010 kubelet[2243]: I0310 01:33:21.637724 2243 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:33:21.640010 kubelet[2243]: I0310 01:33:21.637786 2243 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:33:21.640010 kubelet[2243]: I0310 01:33:21.637816 2243 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:33:21.640010 kubelet[2243]: I0310 01:33:21.637845 2243 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/118c1261d56ea5d65c14c7b49b502ac2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"118c1261d56ea5d65c14c7b49b502ac2\") " pod="kube-system/kube-apiserver-localhost" Mar 10 01:33:21.640010 kubelet[2243]: I0310 01:33:21.637868 2243 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/118c1261d56ea5d65c14c7b49b502ac2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"118c1261d56ea5d65c14c7b49b502ac2\") " pod="kube-system/kube-apiserver-localhost" Mar 10 01:33:21.640441 kubelet[2243]: I0310 01:33:21.637931 2243 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/118c1261d56ea5d65c14c7b49b502ac2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"118c1261d56ea5d65c14c7b49b502ac2\") " pod="kube-system/kube-apiserver-localhost" Mar 10 01:33:21.640441 kubelet[2243]: I0310 01:33:21.637964 2243 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:33:21.640441 kubelet[2243]: I0310 01:33:21.637986 2243 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:33:21.640441 kubelet[2243]: I0310 01:33:21.639179 2243 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 10 01:33:21.640441 kubelet[2243]: E0310 01:33:21.639735 2243 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost" Mar 10 01:33:21.653889 systemd[1]: Created slice kubepods-burstable-pod118c1261d56ea5d65c14c7b49b502ac2.slice - libcontainer container 
kubepods-burstable-pod118c1261d56ea5d65c14c7b49b502ac2.slice. Mar 10 01:33:21.687179 kubelet[2243]: E0310 01:33:21.684868 2243 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:33:21.694109 systemd[1]: Created slice kubepods-burstable-pod8747e1f8a49a618fbc1324a8fe2d3754.slice - libcontainer container kubepods-burstable-pod8747e1f8a49a618fbc1324a8fe2d3754.slice. Mar 10 01:33:21.713657 kubelet[2243]: E0310 01:33:21.713074 2243 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:33:21.721154 systemd[1]: Created slice kubepods-burstable-pode944e4cb17af904786c3a2e01e298498.slice - libcontainer container kubepods-burstable-pode944e4cb17af904786c3a2e01e298498.slice. Mar 10 01:33:21.733037 kubelet[2243]: E0310 01:33:21.732512 2243 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:33:21.745305 kubelet[2243]: I0310 01:33:21.743035 2243 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 10 01:33:21.842305 kubelet[2243]: E0310 01:33:21.838535 2243 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="800ms" Mar 10 01:33:21.842305 kubelet[2243]: I0310 01:33:21.841781 2243 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 10 01:33:21.842305 kubelet[2243]: E0310 
01:33:21.842135 2243 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost" Mar 10 01:33:21.991702 kubelet[2243]: E0310 01:33:21.991441 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:33:21.992711 containerd[1475]: time="2026-03-10T01:33:21.992574518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:118c1261d56ea5d65c14c7b49b502ac2,Namespace:kube-system,Attempt:0,}" Mar 10 01:33:22.015398 kubelet[2243]: E0310 01:33:22.015142 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:33:22.020697 containerd[1475]: time="2026-03-10T01:33:22.019504023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,}" Mar 10 01:33:22.034697 kubelet[2243]: E0310 01:33:22.033483 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:33:22.034832 containerd[1475]: time="2026-03-10T01:33:22.034067322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,}" Mar 10 01:33:22.078003 kubelet[2243]: E0310 01:33:22.075388 2243 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 10 01:33:22.249679 kubelet[2243]: I0310 01:33:22.247825 2243 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 10 01:33:22.256303 kubelet[2243]: E0310 01:33:22.253623 2243 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost" Mar 10 01:33:22.383326 kubelet[2243]: E0310 01:33:22.382818 2243 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.144:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 10 01:33:22.643106 kubelet[2243]: E0310 01:33:22.643060 2243 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="1.6s" Mar 10 01:33:22.651653 kubelet[2243]: E0310 01:33:22.651588 2243 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.144:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 10 01:33:22.655597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4120107222.mount: Deactivated successfully. 
Mar 10 01:33:22.681394 containerd[1475]: time="2026-03-10T01:33:22.680763757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 10 01:33:22.690504 containerd[1475]: time="2026-03-10T01:33:22.690436614Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 10 01:33:22.694305 containerd[1475]: time="2026-03-10T01:33:22.694107184Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 10 01:33:22.696132 containerd[1475]: time="2026-03-10T01:33:22.696085256Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 10 01:33:22.698721 containerd[1475]: time="2026-03-10T01:33:22.698647758Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 10 01:33:22.703190 containerd[1475]: time="2026-03-10T01:33:22.703107380Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 10 01:33:22.705470 containerd[1475]: time="2026-03-10T01:33:22.705313584Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 10 01:33:22.713437 containerd[1475]: time="2026-03-10T01:33:22.710122953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 10 01:33:22.713437 
containerd[1475]: time="2026-03-10T01:33:22.712431391Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 719.666018ms" Mar 10 01:33:22.719966 containerd[1475]: time="2026-03-10T01:33:22.718575138Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 697.583879ms" Mar 10 01:33:22.719966 containerd[1475]: time="2026-03-10T01:33:22.719565708Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 685.414169ms" Mar 10 01:33:22.741758 kubelet[2243]: E0310 01:33:22.741614 2243 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.144:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 10 01:33:22.970649 containerd[1475]: time="2026-03-10T01:33:22.969719438Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:33:22.970649 containerd[1475]: time="2026-03-10T01:33:22.970093046Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:33:22.970649 containerd[1475]: time="2026-03-10T01:33:22.970117582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:33:22.971000 containerd[1475]: time="2026-03-10T01:33:22.970433702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:33:22.983001 containerd[1475]: time="2026-03-10T01:33:22.982751388Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:33:22.983001 containerd[1475]: time="2026-03-10T01:33:22.982864007Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:33:22.983001 containerd[1475]: time="2026-03-10T01:33:22.982933747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:33:22.984100 containerd[1475]: time="2026-03-10T01:33:22.983986632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:33:22.999271 containerd[1475]: time="2026-03-10T01:33:22.990655270Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:33:22.999271 containerd[1475]: time="2026-03-10T01:33:22.990762882Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:33:22.999271 containerd[1475]: time="2026-03-10T01:33:22.990787297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:33:23.000122 containerd[1475]: time="2026-03-10T01:33:22.998665099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:33:23.051955 systemd[1]: Started cri-containerd-123cb8f1438b8ec4cf88020819ad4029401242a0e2d6bba087ab59bc7a8cb664.scope - libcontainer container 123cb8f1438b8ec4cf88020819ad4029401242a0e2d6bba087ab59bc7a8cb664. Mar 10 01:33:23.056553 kubelet[2243]: I0310 01:33:23.056522 2243 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 10 01:33:23.057817 kubelet[2243]: E0310 01:33:23.057788 2243 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost" Mar 10 01:33:23.087561 systemd[1]: Started cri-containerd-2598124b7dbd9b2ba5f4c6ff4672a932e6491c75ffc306e014078e91be4c6ba6.scope - libcontainer container 2598124b7dbd9b2ba5f4c6ff4672a932e6491c75ffc306e014078e91be4c6ba6. Mar 10 01:33:23.103586 systemd[1]: Started cri-containerd-189ec1760bf82715df9c4410599cbbcdd4e4cef7455cf4ee25a237e5739885f7.scope - libcontainer container 189ec1760bf82715df9c4410599cbbcdd4e4cef7455cf4ee25a237e5739885f7. 
Mar 10 01:33:23.181612 kubelet[2243]: E0310 01:33:23.181466 2243 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.144:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 10 01:33:23.208285 containerd[1475]: time="2026-03-10T01:33:23.207577447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,} returns sandbox id \"123cb8f1438b8ec4cf88020819ad4029401242a0e2d6bba087ab59bc7a8cb664\"" Mar 10 01:33:23.209105 kubelet[2243]: E0310 01:33:23.208973 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:33:23.218024 containerd[1475]: time="2026-03-10T01:33:23.217881469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:118c1261d56ea5d65c14c7b49b502ac2,Namespace:kube-system,Attempt:0,} returns sandbox id \"2598124b7dbd9b2ba5f4c6ff4672a932e6491c75ffc306e014078e91be4c6ba6\"" Mar 10 01:33:23.219380 containerd[1475]: time="2026-03-10T01:33:23.218715494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,} returns sandbox id \"189ec1760bf82715df9c4410599cbbcdd4e4cef7455cf4ee25a237e5739885f7\"" Mar 10 01:33:23.219380 containerd[1475]: time="2026-03-10T01:33:23.219175073Z" level=info msg="CreateContainer within sandbox \"123cb8f1438b8ec4cf88020819ad4029401242a0e2d6bba087ab59bc7a8cb664\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 10 01:33:23.220755 kubelet[2243]: E0310 01:33:23.220457 2243 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:33:23.221139 kubelet[2243]: E0310 01:33:23.221112 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:33:23.233359 containerd[1475]: time="2026-03-10T01:33:23.233193864Z" level=info msg="CreateContainer within sandbox \"2598124b7dbd9b2ba5f4c6ff4672a932e6491c75ffc306e014078e91be4c6ba6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 10 01:33:23.242128 containerd[1475]: time="2026-03-10T01:33:23.241991694Z" level=info msg="CreateContainer within sandbox \"189ec1760bf82715df9c4410599cbbcdd4e4cef7455cf4ee25a237e5739885f7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 10 01:33:23.300057 containerd[1475]: time="2026-03-10T01:33:23.298149396Z" level=info msg="CreateContainer within sandbox \"123cb8f1438b8ec4cf88020819ad4029401242a0e2d6bba087ab59bc7a8cb664\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f0982e9641ca671c1713b2f1ac02f04f3e458767797b18b1f9f3c400fd390c03\"" Mar 10 01:33:23.300057 containerd[1475]: time="2026-03-10T01:33:23.299686805Z" level=info msg="StartContainer for \"f0982e9641ca671c1713b2f1ac02f04f3e458767797b18b1f9f3c400fd390c03\"" Mar 10 01:33:23.327549 containerd[1475]: time="2026-03-10T01:33:23.327388413Z" level=info msg="CreateContainer within sandbox \"2598124b7dbd9b2ba5f4c6ff4672a932e6491c75ffc306e014078e91be4c6ba6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dbb8bc457892b603fd9833d6dc1c435813c096d042f2fec7ff7b141624fcb7ac\"" Mar 10 01:33:23.331331 containerd[1475]: time="2026-03-10T01:33:23.331295149Z" level=info msg="StartContainer for \"dbb8bc457892b603fd9833d6dc1c435813c096d042f2fec7ff7b141624fcb7ac\"" Mar 10 01:33:23.336099 containerd[1475]: 
time="2026-03-10T01:33:23.335973933Z" level=info msg="CreateContainer within sandbox \"189ec1760bf82715df9c4410599cbbcdd4e4cef7455cf4ee25a237e5739885f7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5f03c941b3c9485aa1de2a041fa3a5dbede79a23f49529177b51bb032546ec22\"" Mar 10 01:33:23.338332 containerd[1475]: time="2026-03-10T01:33:23.337737490Z" level=info msg="StartContainer for \"5f03c941b3c9485aa1de2a041fa3a5dbede79a23f49529177b51bb032546ec22\"" Mar 10 01:33:23.429399 systemd[1]: Started cri-containerd-f0982e9641ca671c1713b2f1ac02f04f3e458767797b18b1f9f3c400fd390c03.scope - libcontainer container f0982e9641ca671c1713b2f1ac02f04f3e458767797b18b1f9f3c400fd390c03. Mar 10 01:33:23.463146 systemd[1]: Started cri-containerd-dbb8bc457892b603fd9833d6dc1c435813c096d042f2fec7ff7b141624fcb7ac.scope - libcontainer container dbb8bc457892b603fd9833d6dc1c435813c096d042f2fec7ff7b141624fcb7ac. Mar 10 01:33:23.488572 systemd[1]: Started cri-containerd-5f03c941b3c9485aa1de2a041fa3a5dbede79a23f49529177b51bb032546ec22.scope - libcontainer container 5f03c941b3c9485aa1de2a041fa3a5dbede79a23f49529177b51bb032546ec22. 
Mar 10 01:33:23.563673 containerd[1475]: time="2026-03-10T01:33:23.563508803Z" level=info msg="StartContainer for \"f0982e9641ca671c1713b2f1ac02f04f3e458767797b18b1f9f3c400fd390c03\" returns successfully" Mar 10 01:33:23.613191 containerd[1475]: time="2026-03-10T01:33:23.613026921Z" level=info msg="StartContainer for \"dbb8bc457892b603fd9833d6dc1c435813c096d042f2fec7ff7b141624fcb7ac\" returns successfully" Mar 10 01:33:23.654276 containerd[1475]: time="2026-03-10T01:33:23.651636677Z" level=info msg="StartContainer for \"5f03c941b3c9485aa1de2a041fa3a5dbede79a23f49529177b51bb032546ec22\" returns successfully" Mar 10 01:33:24.374636 kubelet[2243]: E0310 01:33:24.373667 2243 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:33:24.374636 kubelet[2243]: E0310 01:33:24.373834 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:33:24.401273 kubelet[2243]: E0310 01:33:24.399699 2243 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:33:24.411180 kubelet[2243]: E0310 01:33:24.411110 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:33:24.414807 kubelet[2243]: E0310 01:33:24.414751 2243 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:33:24.414961 kubelet[2243]: E0310 01:33:24.414948 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:33:24.663038 
kubelet[2243]: I0310 01:33:24.661878 2243 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 10 01:33:25.433631 kubelet[2243]: E0310 01:33:25.433589 2243 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:33:25.434349 kubelet[2243]: E0310 01:33:25.433797 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:33:25.441042 kubelet[2243]: E0310 01:33:25.439716 2243 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:33:25.441042 kubelet[2243]: E0310 01:33:25.439950 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:33:25.444505 kubelet[2243]: E0310 01:33:25.444352 2243 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:33:25.444505 kubelet[2243]: E0310 01:33:25.444535 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:33:26.439828 kubelet[2243]: E0310 01:33:26.438369 2243 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:33:26.439828 kubelet[2243]: E0310 01:33:26.438569 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:33:26.439828 kubelet[2243]: E0310 01:33:26.439551 2243 
kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:33:26.439828 kubelet[2243]: E0310 01:33:26.439684 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:33:27.178626 kubelet[2243]: I0310 01:33:27.177758 2243 apiserver.go:52] "Watching apiserver" Mar 10 01:33:27.200278 kubelet[2243]: E0310 01:33:27.198638 2243 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 10 01:33:27.250151 kubelet[2243]: I0310 01:33:27.245706 2243 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 10 01:33:27.255615 kubelet[2243]: I0310 01:33:27.252489 2243 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 10 01:33:27.255615 kubelet[2243]: E0310 01:33:27.252524 2243 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 10 01:33:27.337067 kubelet[2243]: I0310 01:33:27.336297 2243 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 10 01:33:27.368372 kubelet[2243]: E0310 01:33:27.368081 2243 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.189b56e0a8f18ad0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-10 01:33:21.207487184 +0000 UTC m=+1.991478215,LastTimestamp:2026-03-10 01:33:21.207487184 +0000 UTC 
m=+1.991478215,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 10 01:33:27.391023 kubelet[2243]: E0310 01:33:27.388894 2243 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 10 01:33:27.391023 kubelet[2243]: I0310 01:33:27.388976 2243 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 10 01:33:27.394028 kubelet[2243]: E0310 01:33:27.392537 2243 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 10 01:33:27.394028 kubelet[2243]: I0310 01:33:27.392563 2243 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 10 01:33:27.396041 kubelet[2243]: E0310 01:33:27.394421 2243 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 10 01:33:27.435168 kubelet[2243]: I0310 01:33:27.434729 2243 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 10 01:33:27.439568 kubelet[2243]: I0310 01:33:27.434729 2243 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 10 01:33:27.443127 kubelet[2243]: E0310 01:33:27.442621 2243 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 10 01:33:27.443868 kubelet[2243]: E0310 01:33:27.443850 2243 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:33:27.446874 kubelet[2243]: E0310 01:33:27.446405 2243 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 10 01:33:27.446874 kubelet[2243]: E0310 01:33:27.446581 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:33:29.586425 kubelet[2243]: I0310 01:33:29.584804 2243 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 10 01:33:29.621105 kubelet[2243]: E0310 01:33:29.619708 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:33:30.482663 kubelet[2243]: E0310 01:33:30.481298 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:33:34.682930 systemd[1]: Reloading requested from client PID 2542 ('systemctl') (unit session-7.scope)... Mar 10 01:33:34.684131 systemd[1]: Reloading... Mar 10 01:33:35.642900 zram_generator::config[2584]: No configuration found. 
Mar 10 01:33:36.199892 kubelet[2243]: I0310 01:33:36.197448 2243 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 10 01:33:36.218673 kubelet[2243]: I0310 01:33:36.218579 2243 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=7.214755363 podStartE2EDuration="7.214755363s" podCreationTimestamp="2026-03-10 01:33:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:33:31.970121619 +0000 UTC m=+12.754112669" watchObservedRunningTime="2026-03-10 01:33:36.214755363 +0000 UTC m=+16.998746394" Mar 10 01:33:36.220149 kubelet[2243]: E0310 01:33:36.220087 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:33:36.371918 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 10 01:33:36.744990 kubelet[2243]: E0310 01:33:36.744649 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:33:36.924469 systemd[1]: Reloading finished in 2239 ms. 
Mar 10 01:33:37.015193 kubelet[2243]: I0310 01:33:37.013891 2243 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 10 01:33:37.193830 kubelet[2243]: E0310 01:33:37.191275 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:33:37.193830 kubelet[2243]: I0310 01:33:37.212734 2243 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.212433335 podStartE2EDuration="1.212433335s" podCreationTimestamp="2026-03-10 01:33:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:33:37.078440069 +0000 UTC m=+17.862431100" watchObservedRunningTime="2026-03-10 01:33:37.212433335 +0000 UTC m=+17.996424356" Mar 10 01:33:37.278408 kubelet[2243]: I0310 01:33:37.274054 2243 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 10 01:33:37.278194 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:33:37.300035 systemd[1]: kubelet.service: Deactivated successfully. Mar 10 01:33:37.300724 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 01:33:37.300962 systemd[1]: kubelet.service: Consumed 7.076s CPU time, 140.8M memory peak, 0B memory swap peak. Mar 10 01:33:37.323127 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:33:37.941457 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 10 01:33:37.975847 (kubelet)[2626]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 10 01:33:38.225003 kubelet[2626]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 10 01:33:38.230425 kubelet[2626]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 10 01:33:38.230425 kubelet[2626]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 10 01:33:38.230425 kubelet[2626]: I0310 01:33:38.225479 2626 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 10 01:33:38.273738 kubelet[2626]: I0310 01:33:38.273654 2626 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 10 01:33:38.273738 kubelet[2626]: I0310 01:33:38.273716 2626 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 10 01:33:38.284695 kubelet[2626]: I0310 01:33:38.282520 2626 server.go:956] "Client rotation is on, will bootstrap in background" Mar 10 01:33:38.285543 kubelet[2626]: I0310 01:33:38.285443 2626 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 10 01:33:38.308062 kubelet[2626]: I0310 01:33:38.307537 2626 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 10 01:33:38.323171 kubelet[2626]: E0310 01:33:38.322984 2626 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = 
Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 10 01:33:38.323171 kubelet[2626]: I0310 01:33:38.323036 2626 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 10 01:33:38.343294 kubelet[2626]: I0310 01:33:38.340864 2626 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 10 01:33:38.343294 kubelet[2626]: I0310 01:33:38.341395 2626 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 10 01:33:38.343294 kubelet[2626]: I0310 01:33:38.341441 2626 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolic
y":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 10 01:33:38.343294 kubelet[2626]: I0310 01:33:38.341698 2626 topology_manager.go:138] "Creating topology manager with none policy" Mar 10 01:33:38.343673 kubelet[2626]: I0310 01:33:38.341715 2626 container_manager_linux.go:303] "Creating device plugin manager" Mar 10 01:33:38.343673 kubelet[2626]: I0310 01:33:38.342118 2626 state_mem.go:36] "Initialized new in-memory state store" Mar 10 01:33:38.343673 kubelet[2626]: I0310 01:33:38.342611 2626 kubelet.go:480] "Attempting to sync node with API server" Mar 10 01:33:38.343673 kubelet[2626]: I0310 01:33:38.342635 2626 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 10 01:33:38.343673 kubelet[2626]: I0310 01:33:38.342675 2626 kubelet.go:386] "Adding apiserver pod source" Mar 10 01:33:38.343673 kubelet[2626]: I0310 01:33:38.342713 2626 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 10 01:33:38.364735 kubelet[2626]: I0310 01:33:38.364692 2626 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 10 01:33:38.368037 kubelet[2626]: I0310 01:33:38.368010 2626 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 10 01:33:38.430444 kubelet[2626]: I0310 01:33:38.429576 2626 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 10 01:33:38.431340 kubelet[2626]: I0310 01:33:38.430954 2626 server.go:1289] "Started kubelet" Mar 10 01:33:38.471641 kubelet[2626]: I0310 01:33:38.470840 2626 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 
Mar 10 01:33:38.484445 kubelet[2626]: I0310 01:33:38.483061 2626 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 10 01:33:38.484445 kubelet[2626]: I0310 01:33:38.483839 2626 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 10 01:33:38.522355 kubelet[2626]: I0310 01:33:38.521683 2626 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 10 01:33:38.523559 kubelet[2626]: I0310 01:33:38.522548 2626 server.go:317] "Adding debug handlers to kubelet server"
Mar 10 01:33:38.523559 kubelet[2626]: E0310 01:33:38.522592 2626 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 10 01:33:38.525279 kubelet[2626]: I0310 01:33:38.523856 2626 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 10 01:33:38.525748 kubelet[2626]: I0310 01:33:38.525629 2626 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 10 01:33:38.525929 sudo[2643]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Mar 10 01:33:38.526721 sudo[2643]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Mar 10 01:33:38.536153 kubelet[2626]: I0310 01:33:38.534554 2626 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 10 01:33:38.536153 kubelet[2626]: I0310 01:33:38.534907 2626 reconciler.go:26] "Reconciler: start to sync state"
Mar 10 01:33:38.536685 kubelet[2626]: I0310 01:33:38.536661 2626 factory.go:223] Registration of the systemd container factory successfully
Mar 10 01:33:38.541795 kubelet[2626]: I0310 01:33:38.536881 2626 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 10 01:33:38.671553 kubelet[2626]: I0310 01:33:38.670401 2626 factory.go:223] Registration of the containerd container factory successfully
Mar 10 01:33:38.718893 kubelet[2626]: I0310 01:33:38.718847 2626 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 10 01:33:38.731301 kubelet[2626]: I0310 01:33:38.731206 2626 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 10 01:33:38.731504 kubelet[2626]: I0310 01:33:38.731487 2626 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 10 01:33:38.734371 kubelet[2626]: I0310 01:33:38.731955 2626 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 10 01:33:38.734691 kubelet[2626]: I0310 01:33:38.734534 2626 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 10 01:33:38.734902 kubelet[2626]: E0310 01:33:38.734863 2626 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 10 01:33:38.835293 kubelet[2626]: E0310 01:33:38.835188 2626 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 10 01:33:39.035469 kubelet[2626]: I0310 01:33:39.030735 2626 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 10 01:33:39.040898 kubelet[2626]: E0310 01:33:39.039319 2626 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 10 01:33:39.041063 kubelet[2626]: I0310 01:33:39.040696 2626 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 10 01:33:39.041063 kubelet[2626]: I0310 01:33:39.040938 2626 state_mem.go:36] "Initialized new in-memory state store"
Mar 10 01:33:39.044437 kubelet[2626]: I0310 01:33:39.044331 2626 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 10 01:33:39.044519 kubelet[2626]: I0310 01:33:39.044358 2626 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 10 01:33:39.044519 kubelet[2626]: I0310 01:33:39.044473 2626 policy_none.go:49] "None policy: Start"
Mar 10 01:33:39.044684 kubelet[2626]: I0310 01:33:39.044570 2626 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 10 01:33:39.044684 kubelet[2626]: I0310 01:33:39.044595 2626 state_mem.go:35] "Initializing new in-memory state store"
Mar 10 01:33:39.045912 kubelet[2626]: I0310 01:33:39.045807 2626 state_mem.go:75] "Updated machine memory state"
Mar 10 01:33:39.081392 kubelet[2626]: E0310 01:33:39.080489 2626 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 10 01:33:39.094834 kubelet[2626]: I0310 01:33:39.094653 2626 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 10 01:33:39.095287 kubelet[2626]: I0310 01:33:39.094993 2626 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 10 01:33:39.098312 kubelet[2626]: I0310 01:33:39.098097 2626 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 10 01:33:39.122306 kubelet[2626]: E0310 01:33:39.113604 2626 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 10 01:33:39.402334 kubelet[2626]: I0310 01:33:39.387792 2626 apiserver.go:52] "Watching apiserver"
Mar 10 01:33:39.446106 kubelet[2626]: I0310 01:33:39.445147 2626 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 10 01:33:39.525791 kubelet[2626]: I0310 01:33:39.521479 2626 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 10 01:33:39.583343 kubelet[2626]: I0310 01:33:39.582043 2626 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 10 01:33:39.583343 kubelet[2626]: I0310 01:33:39.583092 2626 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 10 01:33:39.604447 kubelet[2626]: I0310 01:33:39.604310 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/118c1261d56ea5d65c14c7b49b502ac2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"118c1261d56ea5d65c14c7b49b502ac2\") " pod="kube-system/kube-apiserver-localhost"
Mar 10 01:33:39.604447 kubelet[2626]: I0310 01:33:39.604387 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 10 01:33:39.604447 kubelet[2626]: I0310 01:33:39.604418 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 10 01:33:39.604447 kubelet[2626]: I0310 01:33:39.604442 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 10 01:33:39.604900 kubelet[2626]: I0310 01:33:39.604467 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 10 01:33:39.604900 kubelet[2626]: I0310 01:33:39.604490 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost"
Mar 10 01:33:39.604900 kubelet[2626]: I0310 01:33:39.604673 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/118c1261d56ea5d65c14c7b49b502ac2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"118c1261d56ea5d65c14c7b49b502ac2\") " pod="kube-system/kube-apiserver-localhost"
Mar 10 01:33:39.604900 kubelet[2626]: I0310 01:33:39.604730 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/118c1261d56ea5d65c14c7b49b502ac2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"118c1261d56ea5d65c14c7b49b502ac2\") " pod="kube-system/kube-apiserver-localhost"
Mar 10 01:33:39.604900 kubelet[2626]: I0310 01:33:39.604796 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost"
Mar 10 01:33:39.693494 kubelet[2626]: E0310 01:33:39.684129 2626 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Mar 10 01:33:39.729991 kubelet[2626]: E0310 01:33:39.691888 2626 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Mar 10 01:33:39.781677 kubelet[2626]: E0310 01:33:39.739207 2626 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Mar 10 01:33:39.781677 kubelet[2626]: I0310 01:33:39.780534 2626 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Mar 10 01:33:39.926300 kubelet[2626]: I0310 01:33:39.918352 2626 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Mar 10 01:33:39.926300 kubelet[2626]: I0310 01:33:39.920454 2626 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Mar 10 01:33:39.926300 kubelet[2626]: I0310 01:33:39.920559 2626 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 10 01:33:39.975634 containerd[1475]: time="2026-03-10T01:33:39.974024768Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 10 01:33:39.985859 kubelet[2626]: I0310 01:33:39.981484 2626 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 10 01:33:40.000039 kubelet[2626]: E0310 01:33:39.998274 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:33:40.033389 kubelet[2626]: E0310 01:33:40.033344 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:33:40.082437 kubelet[2626]: E0310 01:33:40.081317 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:33:40.086994 systemd[1]: Created slice kubepods-besteffort-pod56ea8463_95bd_4f9a_b975_3d46509f1a5b.slice - libcontainer container kubepods-besteffort-pod56ea8463_95bd_4f9a_b975_3d46509f1a5b.slice.
Mar 10 01:33:40.127285 kubelet[2626]: I0310 01:33:40.126167 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw2df\" (UniqueName: \"kubernetes.io/projected/56ea8463-95bd-4f9a-b975-3d46509f1a5b-kube-api-access-gw2df\") pod \"kube-proxy-lc8w9\" (UID: \"56ea8463-95bd-4f9a-b975-3d46509f1a5b\") " pod="kube-system/kube-proxy-lc8w9"
Mar 10 01:33:40.127285 kubelet[2626]: I0310 01:33:40.126310 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/56ea8463-95bd-4f9a-b975-3d46509f1a5b-kube-proxy\") pod \"kube-proxy-lc8w9\" (UID: \"56ea8463-95bd-4f9a-b975-3d46509f1a5b\") " pod="kube-system/kube-proxy-lc8w9"
Mar 10 01:33:40.127285 kubelet[2626]: I0310 01:33:40.126387 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56ea8463-95bd-4f9a-b975-3d46509f1a5b-xtables-lock\") pod \"kube-proxy-lc8w9\" (UID: \"56ea8463-95bd-4f9a-b975-3d46509f1a5b\") " pod="kube-system/kube-proxy-lc8w9"
Mar 10 01:33:40.127285 kubelet[2626]: I0310 01:33:40.126412 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56ea8463-95bd-4f9a-b975-3d46509f1a5b-lib-modules\") pod \"kube-proxy-lc8w9\" (UID: \"56ea8463-95bd-4f9a-b975-3d46509f1a5b\") " pod="kube-system/kube-proxy-lc8w9"
Mar 10 01:33:40.206450 kubelet[2626]: I0310 01:33:40.205089 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.205067584 podStartE2EDuration="3.205067584s" podCreationTimestamp="2026-03-10 01:33:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:33:40.140647292 +0000 UTC m=+2.130547849" watchObservedRunningTime="2026-03-10 01:33:40.205067584 +0000 UTC m=+2.194968132"
Mar 10 01:33:40.741626 kubelet[2626]: E0310 01:33:40.740398 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:33:40.798464 containerd[1475]: time="2026-03-10T01:33:40.795199622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lc8w9,Uid:56ea8463-95bd-4f9a-b975-3d46509f1a5b,Namespace:kube-system,Attempt:0,}"
Mar 10 01:33:41.024930 kubelet[2626]: E0310 01:33:41.024503 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:33:41.027272 kubelet[2626]: E0310 01:33:41.025498 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:33:41.027272 kubelet[2626]: E0310 01:33:41.025887 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:33:41.161394 containerd[1475]: time="2026-03-10T01:33:41.160737994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 10 01:33:41.161394 containerd[1475]: time="2026-03-10T01:33:41.160903982Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 10 01:33:41.161394 containerd[1475]: time="2026-03-10T01:33:41.160924780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:33:41.161394 containerd[1475]: time="2026-03-10T01:33:41.161041037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:33:41.608182 systemd[1]: Started cri-containerd-7833a6dc99b58d8a2909273485161059d0b0e7773c7c7803cd5adfa89beb2cae.scope - libcontainer container 7833a6dc99b58d8a2909273485161059d0b0e7773c7c7803cd5adfa89beb2cae.
Mar 10 01:33:41.687584 sudo[2643]: pam_unix(sudo:session): session closed for user root
Mar 10 01:33:41.719177 containerd[1475]: time="2026-03-10T01:33:41.719079421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lc8w9,Uid:56ea8463-95bd-4f9a-b975-3d46509f1a5b,Namespace:kube-system,Attempt:0,} returns sandbox id \"7833a6dc99b58d8a2909273485161059d0b0e7773c7c7803cd5adfa89beb2cae\""
Mar 10 01:33:41.721263 kubelet[2626]: E0310 01:33:41.721108 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:33:41.770484 containerd[1475]: time="2026-03-10T01:33:41.755704785Z" level=info msg="CreateContainer within sandbox \"7833a6dc99b58d8a2909273485161059d0b0e7773c7c7803cd5adfa89beb2cae\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 10 01:33:42.240401 kubelet[2626]: E0310 01:33:42.238423 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:33:42.248697 kubelet[2626]: E0310 01:33:42.241699 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:33:42.306209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1137192226.mount: Deactivated successfully.
Mar 10 01:33:42.330991 containerd[1475]: time="2026-03-10T01:33:42.330474587Z" level=info msg="CreateContainer within sandbox \"7833a6dc99b58d8a2909273485161059d0b0e7773c7c7803cd5adfa89beb2cae\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c1c27853679c82376917f5458105eea94e80f3f5a1b9c8f4e34741ea2acb3c10\""
Mar 10 01:33:42.335291 containerd[1475]: time="2026-03-10T01:33:42.332575067Z" level=info msg="StartContainer for \"c1c27853679c82376917f5458105eea94e80f3f5a1b9c8f4e34741ea2acb3c10\""
Mar 10 01:33:42.721862 systemd[1]: Started cri-containerd-c1c27853679c82376917f5458105eea94e80f3f5a1b9c8f4e34741ea2acb3c10.scope - libcontainer container c1c27853679c82376917f5458105eea94e80f3f5a1b9c8f4e34741ea2acb3c10.
Mar 10 01:33:43.132420 containerd[1475]: time="2026-03-10T01:33:43.121675132Z" level=info msg="StartContainer for \"c1c27853679c82376917f5458105eea94e80f3f5a1b9c8f4e34741ea2acb3c10\" returns successfully"
Mar 10 01:33:43.495932 kubelet[2626]: E0310 01:33:43.494541 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:33:43.543521 kubelet[2626]: I0310 01:33:43.542787 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lc8w9" podStartSLOduration=4.542503234 podStartE2EDuration="4.542503234s" podCreationTimestamp="2026-03-10 01:33:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:33:43.534164785 +0000 UTC m=+5.524065343" watchObservedRunningTime="2026-03-10 01:33:43.542503234 +0000 UTC m=+5.532403762"
Mar 10 01:33:44.506520 kubelet[2626]: E0310 01:33:44.506450 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:33:44.609449 kubelet[2626]: I0310 01:33:44.609352 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d0168a63-da04-4162-9ead-609751138a20-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-qqmrp\" (UID: \"d0168a63-da04-4162-9ead-609751138a20\") " pod="kube-system/cilium-operator-6c4d7847fc-qqmrp"
Mar 10 01:33:44.609614 kubelet[2626]: I0310 01:33:44.609451 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnj9f\" (UniqueName: \"kubernetes.io/projected/d0168a63-da04-4162-9ead-609751138a20-kube-api-access-lnj9f\") pod \"cilium-operator-6c4d7847fc-qqmrp\" (UID: \"d0168a63-da04-4162-9ead-609751138a20\") " pod="kube-system/cilium-operator-6c4d7847fc-qqmrp"
Mar 10 01:33:44.635318 systemd[1]: Created slice kubepods-besteffort-podd0168a63_da04_4162_9ead_609751138a20.slice - libcontainer container kubepods-besteffort-podd0168a63_da04_4162_9ead_609751138a20.slice.
Mar 10 01:33:44.672804 systemd[1]: Created slice kubepods-burstable-pod1aab988c_165e_410e_a954_fb952044c1dc.slice - libcontainer container kubepods-burstable-pod1aab988c_165e_410e_a954_fb952044c1dc.slice.
Mar 10 01:33:44.709934 kubelet[2626]: I0310 01:33:44.709883 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1aab988c-165e-410e-a954-fb952044c1dc-bpf-maps\") pod \"cilium-fmp9s\" (UID: \"1aab988c-165e-410e-a954-fb952044c1dc\") " pod="kube-system/cilium-fmp9s"
Mar 10 01:33:44.711047 kubelet[2626]: I0310 01:33:44.710973 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1aab988c-165e-410e-a954-fb952044c1dc-host-proc-sys-net\") pod \"cilium-fmp9s\" (UID: \"1aab988c-165e-410e-a954-fb952044c1dc\") " pod="kube-system/cilium-fmp9s"
Mar 10 01:33:44.711285 kubelet[2626]: I0310 01:33:44.711262 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1aab988c-165e-410e-a954-fb952044c1dc-host-proc-sys-kernel\") pod \"cilium-fmp9s\" (UID: \"1aab988c-165e-410e-a954-fb952044c1dc\") " pod="kube-system/cilium-fmp9s"
Mar 10 01:33:44.711701 kubelet[2626]: I0310 01:33:44.711626 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1aab988c-165e-410e-a954-fb952044c1dc-cni-path\") pod \"cilium-fmp9s\" (UID: \"1aab988c-165e-410e-a954-fb952044c1dc\") " pod="kube-system/cilium-fmp9s"
Mar 10 01:33:44.711960 kubelet[2626]: I0310 01:33:44.711938 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1aab988c-165e-410e-a954-fb952044c1dc-hostproc\") pod \"cilium-fmp9s\" (UID: \"1aab988c-165e-410e-a954-fb952044c1dc\") " pod="kube-system/cilium-fmp9s"
Mar 10 01:33:44.712318 kubelet[2626]: I0310 01:33:44.712204 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1aab988c-165e-410e-a954-fb952044c1dc-etc-cni-netd\") pod \"cilium-fmp9s\" (UID: \"1aab988c-165e-410e-a954-fb952044c1dc\") " pod="kube-system/cilium-fmp9s"
Mar 10 01:33:44.714278 kubelet[2626]: I0310 01:33:44.712599 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1aab988c-165e-410e-a954-fb952044c1dc-xtables-lock\") pod \"cilium-fmp9s\" (UID: \"1aab988c-165e-410e-a954-fb952044c1dc\") " pod="kube-system/cilium-fmp9s"
Mar 10 01:33:44.714278 kubelet[2626]: I0310 01:33:44.712861 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1aab988c-165e-410e-a954-fb952044c1dc-hubble-tls\") pod \"cilium-fmp9s\" (UID: \"1aab988c-165e-410e-a954-fb952044c1dc\") " pod="kube-system/cilium-fmp9s"
Mar 10 01:33:44.714278 kubelet[2626]: I0310 01:33:44.712909 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1aab988c-165e-410e-a954-fb952044c1dc-clustermesh-secrets\") pod \"cilium-fmp9s\" (UID: \"1aab988c-165e-410e-a954-fb952044c1dc\") " pod="kube-system/cilium-fmp9s"
Mar 10 01:33:44.714278 kubelet[2626]: I0310 01:33:44.712935 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1aab988c-165e-410e-a954-fb952044c1dc-cilium-run\") pod \"cilium-fmp9s\" (UID: \"1aab988c-165e-410e-a954-fb952044c1dc\") " pod="kube-system/cilium-fmp9s"
Mar 10 01:33:44.714278 kubelet[2626]: I0310 01:33:44.712958 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1aab988c-165e-410e-a954-fb952044c1dc-cilium-cgroup\") pod \"cilium-fmp9s\" (UID: \"1aab988c-165e-410e-a954-fb952044c1dc\") " pod="kube-system/cilium-fmp9s"
Mar 10 01:33:44.714278 kubelet[2626]: I0310 01:33:44.712983 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86qtj\" (UniqueName: \"kubernetes.io/projected/1aab988c-165e-410e-a954-fb952044c1dc-kube-api-access-86qtj\") pod \"cilium-fmp9s\" (UID: \"1aab988c-165e-410e-a954-fb952044c1dc\") " pod="kube-system/cilium-fmp9s"
Mar 10 01:33:44.714516 kubelet[2626]: I0310 01:33:44.713008 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1aab988c-165e-410e-a954-fb952044c1dc-lib-modules\") pod \"cilium-fmp9s\" (UID: \"1aab988c-165e-410e-a954-fb952044c1dc\") " pod="kube-system/cilium-fmp9s"
Mar 10 01:33:44.714516 kubelet[2626]: I0310 01:33:44.713035 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1aab988c-165e-410e-a954-fb952044c1dc-cilium-config-path\") pod \"cilium-fmp9s\" (UID: \"1aab988c-165e-410e-a954-fb952044c1dc\") " pod="kube-system/cilium-fmp9s"
Mar 10 01:33:44.980287 kubelet[2626]: E0310 01:33:44.969422 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:33:45.009344 kubelet[2626]: E0310 01:33:44.989601 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:33:45.066477 containerd[1475]: time="2026-03-10T01:33:45.004945084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-qqmrp,Uid:d0168a63-da04-4162-9ead-609751138a20,Namespace:kube-system,Attempt:0,}"
Mar 10 01:33:45.066477 containerd[1475]: time="2026-03-10T01:33:45.008863520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fmp9s,Uid:1aab988c-165e-410e-a954-fb952044c1dc,Namespace:kube-system,Attempt:0,}"
Mar 10 01:33:45.443613 kubelet[2626]: E0310 01:33:45.417893 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:33:45.694521 kubelet[2626]: E0310 01:33:45.690596 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:33:45.759420 containerd[1475]: time="2026-03-10T01:33:45.757447419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 10 01:33:45.759420 containerd[1475]: time="2026-03-10T01:33:45.757534380Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 10 01:33:45.759420 containerd[1475]: time="2026-03-10T01:33:45.757565858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:33:45.759420 containerd[1475]: time="2026-03-10T01:33:45.757904970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:33:45.810529 containerd[1475]: time="2026-03-10T01:33:45.810159873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 10 01:33:45.810529 containerd[1475]: time="2026-03-10T01:33:45.810372950Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 10 01:33:45.810529 containerd[1475]: time="2026-03-10T01:33:45.810398088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:33:45.811123 containerd[1475]: time="2026-03-10T01:33:45.810622756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:33:45.875443 systemd[1]: Started cri-containerd-28b5c95e15b0d3cebb2a3ac080cdd51d27c91fd6eeebe0958fbe601374191208.scope - libcontainer container 28b5c95e15b0d3cebb2a3ac080cdd51d27c91fd6eeebe0958fbe601374191208.
Mar 10 01:33:46.035874 systemd[1]: Started cri-containerd-83456541c2a75d29203a877d7b467b0d3fe8458f182c152e0ef10c36fa99edaf.scope - libcontainer container 83456541c2a75d29203a877d7b467b0d3fe8458f182c152e0ef10c36fa99edaf.
Mar 10 01:33:46.211266 containerd[1475]: time="2026-03-10T01:33:46.211057565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fmp9s,Uid:1aab988c-165e-410e-a954-fb952044c1dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"28b5c95e15b0d3cebb2a3ac080cdd51d27c91fd6eeebe0958fbe601374191208\""
Mar 10 01:33:46.219391 kubelet[2626]: E0310 01:33:46.216425 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:33:46.222808 containerd[1475]: time="2026-03-10T01:33:46.222444069Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Mar 10 01:33:46.270326 kubelet[2626]: E0310 01:33:46.268082 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:33:46.296197 containerd[1475]: time="2026-03-10T01:33:46.294311112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-qqmrp,Uid:d0168a63-da04-4162-9ead-609751138a20,Namespace:kube-system,Attempt:0,} returns sandbox id \"83456541c2a75d29203a877d7b467b0d3fe8458f182c152e0ef10c36fa99edaf\""
Mar 10 01:33:46.296411 kubelet[2626]: E0310 01:33:46.296310 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:33:46.321882 kubelet[2626]: E0310 01:33:46.321301 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:33:46.703083 kubelet[2626]: E0310 01:33:46.701611 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:33:46.703083 kubelet[2626]: E0310 01:33:46.702885 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:34:04.191013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2647324951.mount: Deactivated successfully.
Mar 10 01:34:15.902054 containerd[1475]: time="2026-03-10T01:34:15.900420537Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:34:15.902054 containerd[1475]: time="2026-03-10T01:34:15.901842708Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Mar 10 01:34:15.907519 containerd[1475]: time="2026-03-10T01:34:15.907413676Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:34:15.910832 containerd[1475]: time="2026-03-10T01:34:15.909185540Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 29.68668673s"
Mar 10 01:34:15.910921 containerd[1475]: time="2026-03-10T01:34:15.910787965Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Mar 10 01:34:15.914078 containerd[1475]: time="2026-03-10T01:34:15.913964325Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 10 01:34:15.940516 containerd[1475]: time="2026-03-10T01:34:15.936364478Z" level=info msg="CreateContainer within sandbox \"28b5c95e15b0d3cebb2a3ac080cdd51d27c91fd6eeebe0958fbe601374191208\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 10 01:34:15.988805 containerd[1475]: time="2026-03-10T01:34:15.988607829Z" level=info msg="CreateContainer within sandbox \"28b5c95e15b0d3cebb2a3ac080cdd51d27c91fd6eeebe0958fbe601374191208\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"38ab32b55c473b48e183c26e664b9bf05379bb739a687786161248bab8dfc292\""
Mar 10 01:34:15.991846 containerd[1475]: time="2026-03-10T01:34:15.990120871Z" level=info msg="StartContainer for \"38ab32b55c473b48e183c26e664b9bf05379bb739a687786161248bab8dfc292\""
Mar 10 01:34:16.093086 systemd[1]: Started cri-containerd-38ab32b55c473b48e183c26e664b9bf05379bb739a687786161248bab8dfc292.scope - libcontainer container 38ab32b55c473b48e183c26e664b9bf05379bb739a687786161248bab8dfc292.
Mar 10 01:34:16.219605 containerd[1475]: time="2026-03-10T01:34:16.218325133Z" level=info msg="StartContainer for \"38ab32b55c473b48e183c26e664b9bf05379bb739a687786161248bab8dfc292\" returns successfully"
Mar 10 01:34:16.240198 systemd[1]: cri-containerd-38ab32b55c473b48e183c26e664b9bf05379bb739a687786161248bab8dfc292.scope: Deactivated successfully.
Mar 10 01:34:16.259893 kubelet[2626]: E0310 01:34:16.258141 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:34:16.386841 containerd[1475]: time="2026-03-10T01:34:16.386092680Z" level=info msg="shim disconnected" id=38ab32b55c473b48e183c26e664b9bf05379bb739a687786161248bab8dfc292 namespace=k8s.io
Mar 10 01:34:16.394988 containerd[1475]: time="2026-03-10T01:34:16.391534647Z" level=warning msg="cleaning up after shim disconnected" id=38ab32b55c473b48e183c26e664b9bf05379bb739a687786161248bab8dfc292 namespace=k8s.io
Mar 10 01:34:16.394988 containerd[1475]: time="2026-03-10T01:34:16.391562119Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 10 01:34:16.984592 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38ab32b55c473b48e183c26e664b9bf05379bb739a687786161248bab8dfc292-rootfs.mount: Deactivated successfully.
Mar 10 01:34:17.270740 kubelet[2626]: E0310 01:34:17.266769 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:34:17.282803 containerd[1475]: time="2026-03-10T01:34:17.282627497Z" level=info msg="CreateContainer within sandbox \"28b5c95e15b0d3cebb2a3ac080cdd51d27c91fd6eeebe0958fbe601374191208\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 10 01:34:17.348438 containerd[1475]: time="2026-03-10T01:34:17.348158094Z" level=info msg="CreateContainer within sandbox \"28b5c95e15b0d3cebb2a3ac080cdd51d27c91fd6eeebe0958fbe601374191208\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bc020d122e54a7b68cac98b65474174821289949d6752509cc23a4de0cae3932\""
Mar 10 01:34:17.354150 containerd[1475]: time="2026-03-10T01:34:17.354022792Z" level=info msg="StartContainer for \"bc020d122e54a7b68cac98b65474174821289949d6752509cc23a4de0cae3932\""
Mar 10 01:34:17.639605 systemd[1]: Started cri-containerd-bc020d122e54a7b68cac98b65474174821289949d6752509cc23a4de0cae3932.scope - libcontainer container bc020d122e54a7b68cac98b65474174821289949d6752509cc23a4de0cae3932.
Mar 10 01:34:17.728255 containerd[1475]: time="2026-03-10T01:34:17.728053777Z" level=info msg="StartContainer for \"bc020d122e54a7b68cac98b65474174821289949d6752509cc23a4de0cae3932\" returns successfully"
Mar 10 01:34:17.766797 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 10 01:34:17.767831 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 10 01:34:17.767928 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 10 01:34:17.786130 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 10 01:34:17.787156 systemd[1]: cri-containerd-bc020d122e54a7b68cac98b65474174821289949d6752509cc23a4de0cae3932.scope: Deactivated successfully.
Mar 10 01:34:17.893295 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 10 01:34:17.897166 containerd[1475]: time="2026-03-10T01:34:17.896825812Z" level=info msg="shim disconnected" id=bc020d122e54a7b68cac98b65474174821289949d6752509cc23a4de0cae3932 namespace=k8s.io
Mar 10 01:34:17.897166 containerd[1475]: time="2026-03-10T01:34:17.897058877Z" level=warning msg="cleaning up after shim disconnected" id=bc020d122e54a7b68cac98b65474174821289949d6752509cc23a4de0cae3932 namespace=k8s.io
Mar 10 01:34:17.897166 containerd[1475]: time="2026-03-10T01:34:17.897074316Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 10 01:34:17.971910 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc020d122e54a7b68cac98b65474174821289949d6752509cc23a4de0cae3932-rootfs.mount: Deactivated successfully.
Mar 10 01:34:18.274357 kubelet[2626]: E0310 01:34:18.273487 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:34:18.294303 containerd[1475]: time="2026-03-10T01:34:18.291846598Z" level=info msg="CreateContainer within sandbox \"28b5c95e15b0d3cebb2a3ac080cdd51d27c91fd6eeebe0958fbe601374191208\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 10 01:34:18.346844 containerd[1475]: time="2026-03-10T01:34:18.345874396Z" level=info msg="CreateContainer within sandbox \"28b5c95e15b0d3cebb2a3ac080cdd51d27c91fd6eeebe0958fbe601374191208\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4d23fe64b9515390e7b405b3524c15c6fe164ce265debc4782b1caea5b162145\""
Mar 10 01:34:18.349163 containerd[1475]: time="2026-03-10T01:34:18.348706588Z" level=info msg="StartContainer for \"4d23fe64b9515390e7b405b3524c15c6fe164ce265debc4782b1caea5b162145\""
Mar 10 01:34:18.451913 systemd[1]: Started cri-containerd-4d23fe64b9515390e7b405b3524c15c6fe164ce265debc4782b1caea5b162145.scope - libcontainer container 4d23fe64b9515390e7b405b3524c15c6fe164ce265debc4782b1caea5b162145.
Mar 10 01:34:18.544542 containerd[1475]: time="2026-03-10T01:34:18.539074845Z" level=info msg="StartContainer for \"4d23fe64b9515390e7b405b3524c15c6fe164ce265debc4782b1caea5b162145\" returns successfully"
Mar 10 01:34:18.539174 systemd[1]: cri-containerd-4d23fe64b9515390e7b405b3524c15c6fe164ce265debc4782b1caea5b162145.scope: Deactivated successfully.
Mar 10 01:34:18.706828 containerd[1475]: time="2026-03-10T01:34:18.706436850Z" level=info msg="shim disconnected" id=4d23fe64b9515390e7b405b3524c15c6fe164ce265debc4782b1caea5b162145 namespace=k8s.io
Mar 10 01:34:18.706828 containerd[1475]: time="2026-03-10T01:34:18.706519173Z" level=warning msg="cleaning up after shim disconnected" id=4d23fe64b9515390e7b405b3524c15c6fe164ce265debc4782b1caea5b162145 namespace=k8s.io
Mar 10 01:34:18.706828 containerd[1475]: time="2026-03-10T01:34:18.706532569Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 10 01:34:18.872296 containerd[1475]: time="2026-03-10T01:34:18.871906781Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:34:18.875619 containerd[1475]: time="2026-03-10T01:34:18.874887970Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Mar 10 01:34:18.876894 containerd[1475]: time="2026-03-10T01:34:18.876801502Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:34:18.880086 containerd[1475]: time="2026-03-10T01:34:18.879875015Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.965849506s"
Mar 10 01:34:18.880086 containerd[1475]: time="2026-03-10T01:34:18.879989880Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Mar 10 01:34:18.889544 containerd[1475]: time="2026-03-10T01:34:18.889349757Z" level=info msg="CreateContainer within sandbox \"83456541c2a75d29203a877d7b467b0d3fe8458f182c152e0ef10c36fa99edaf\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 10 01:34:18.928503 containerd[1475]: time="2026-03-10T01:34:18.928365830Z" level=info msg="CreateContainer within sandbox \"83456541c2a75d29203a877d7b467b0d3fe8458f182c152e0ef10c36fa99edaf\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f7a99bb7629d8ec5bf3b425b6c9248e66f64d8de4bb58691c5117ce86c7b5468\""
Mar 10 01:34:18.932737 containerd[1475]: time="2026-03-10T01:34:18.932618283Z" level=info msg="StartContainer for \"f7a99bb7629d8ec5bf3b425b6c9248e66f64d8de4bb58691c5117ce86c7b5468\""
Mar 10 01:34:18.972011 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d23fe64b9515390e7b405b3524c15c6fe164ce265debc4782b1caea5b162145-rootfs.mount: Deactivated successfully.
Mar 10 01:34:19.048601 systemd[1]: Started cri-containerd-f7a99bb7629d8ec5bf3b425b6c9248e66f64d8de4bb58691c5117ce86c7b5468.scope - libcontainer container f7a99bb7629d8ec5bf3b425b6c9248e66f64d8de4bb58691c5117ce86c7b5468.
Mar 10 01:34:19.165890 containerd[1475]: time="2026-03-10T01:34:19.165494252Z" level=info msg="StartContainer for \"f7a99bb7629d8ec5bf3b425b6c9248e66f64d8de4bb58691c5117ce86c7b5468\" returns successfully"
Mar 10 01:34:19.294830 kubelet[2626]: E0310 01:34:19.293711 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:34:19.300910 kubelet[2626]: E0310 01:34:19.299912 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:34:19.318065 containerd[1475]: time="2026-03-10T01:34:19.317573095Z" level=info msg="CreateContainer within sandbox \"28b5c95e15b0d3cebb2a3ac080cdd51d27c91fd6eeebe0958fbe601374191208\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 10 01:34:19.396320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3268466588.mount: Deactivated successfully.
Mar 10 01:34:19.417754 containerd[1475]: time="2026-03-10T01:34:19.416188351Z" level=info msg="CreateContainer within sandbox \"28b5c95e15b0d3cebb2a3ac080cdd51d27c91fd6eeebe0958fbe601374191208\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a27ada461bd437ff1bdd44d0dadbf9a845080783fc283e1e558ca9b2732259ab\""
Mar 10 01:34:19.421517 containerd[1475]: time="2026-03-10T01:34:19.421475530Z" level=info msg="StartContainer for \"a27ada461bd437ff1bdd44d0dadbf9a845080783fc283e1e558ca9b2732259ab\""
Mar 10 01:34:19.505778 kubelet[2626]: I0310 01:34:19.505654 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-qqmrp" podStartSLOduration=2.92351333 podStartE2EDuration="35.505207735s" podCreationTimestamp="2026-03-10 01:33:44 +0000 UTC" firstStartedPulling="2026-03-10 01:33:46.299404252 +0000 UTC m=+8.289304779" lastFinishedPulling="2026-03-10 01:34:18.881098657 +0000 UTC m=+40.870999184" observedRunningTime="2026-03-10 01:34:19.376402288 +0000 UTC m=+41.366302845" watchObservedRunningTime="2026-03-10 01:34:19.505207735 +0000 UTC m=+41.495108262"
Mar 10 01:34:19.602082 systemd[1]: Started cri-containerd-a27ada461bd437ff1bdd44d0dadbf9a845080783fc283e1e558ca9b2732259ab.scope - libcontainer container a27ada461bd437ff1bdd44d0dadbf9a845080783fc283e1e558ca9b2732259ab.
Mar 10 01:34:19.770653 systemd[1]: cri-containerd-a27ada461bd437ff1bdd44d0dadbf9a845080783fc283e1e558ca9b2732259ab.scope: Deactivated successfully.
Mar 10 01:34:19.783358 containerd[1475]: time="2026-03-10T01:34:19.783160877Z" level=info msg="StartContainer for \"a27ada461bd437ff1bdd44d0dadbf9a845080783fc283e1e558ca9b2732259ab\" returns successfully"
Mar 10 01:34:19.871882 containerd[1475]: time="2026-03-10T01:34:19.871087328Z" level=info msg="shim disconnected" id=a27ada461bd437ff1bdd44d0dadbf9a845080783fc283e1e558ca9b2732259ab namespace=k8s.io
Mar 10 01:34:19.871882 containerd[1475]: time="2026-03-10T01:34:19.871192343Z" level=warning msg="cleaning up after shim disconnected" id=a27ada461bd437ff1bdd44d0dadbf9a845080783fc283e1e558ca9b2732259ab namespace=k8s.io
Mar 10 01:34:19.871882 containerd[1475]: time="2026-03-10T01:34:19.871374322Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 10 01:34:20.318082 kubelet[2626]: E0310 01:34:20.317606 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:34:20.319988 kubelet[2626]: E0310 01:34:20.318979 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:34:20.344037 containerd[1475]: time="2026-03-10T01:34:20.343500642Z" level=info msg="CreateContainer within sandbox \"28b5c95e15b0d3cebb2a3ac080cdd51d27c91fd6eeebe0958fbe601374191208\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 10 01:34:20.431417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3701870640.mount: Deactivated successfully.
Mar 10 01:34:20.625308 containerd[1475]: time="2026-03-10T01:34:20.618812966Z" level=info msg="CreateContainer within sandbox \"28b5c95e15b0d3cebb2a3ac080cdd51d27c91fd6eeebe0958fbe601374191208\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0f6f87b396230bdf7aa2724dc12382997053dd343d7c9dae24c522dc50687449\""
Mar 10 01:34:20.625308 containerd[1475]: time="2026-03-10T01:34:20.621028830Z" level=info msg="StartContainer for \"0f6f87b396230bdf7aa2724dc12382997053dd343d7c9dae24c522dc50687449\""
Mar 10 01:34:20.903641 systemd[1]: Started cri-containerd-0f6f87b396230bdf7aa2724dc12382997053dd343d7c9dae24c522dc50687449.scope - libcontainer container 0f6f87b396230bdf7aa2724dc12382997053dd343d7c9dae24c522dc50687449.
Mar 10 01:34:21.210982 containerd[1475]: time="2026-03-10T01:34:21.210581141Z" level=info msg="StartContainer for \"0f6f87b396230bdf7aa2724dc12382997053dd343d7c9dae24c522dc50687449\" returns successfully"
Mar 10 01:34:24.008148 kubelet[2626]: I0310 01:34:24.007390 2626 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Mar 10 01:34:24.267962 kubelet[2626]: E0310 01:34:24.267043 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:34:24.303410 systemd[1]: Created slice kubepods-burstable-pod04686b24_52fa_4b38_9366_360dbdcc9426.slice - libcontainer container kubepods-burstable-pod04686b24_52fa_4b38_9366_360dbdcc9426.slice.
Mar 10 01:34:24.319470 systemd[1]: Created slice kubepods-burstable-podc60f91bb_601e_4707_9242_e02c9e862843.slice - libcontainer container kubepods-burstable-podc60f91bb_601e_4707_9242_e02c9e862843.slice.
Mar 10 01:34:24.398362 kubelet[2626]: I0310 01:34:24.396418 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fmp9s" podStartSLOduration=10.705326895 podStartE2EDuration="40.396399612s" podCreationTimestamp="2026-03-10 01:33:44 +0000 UTC" firstStartedPulling="2026-03-10 01:33:46.221949456 +0000 UTC m=+8.211849993" lastFinishedPulling="2026-03-10 01:34:15.913022173 +0000 UTC m=+37.902922710" observedRunningTime="2026-03-10 01:34:24.392838461 +0000 UTC m=+46.382738987" watchObservedRunningTime="2026-03-10 01:34:24.396399612 +0000 UTC m=+46.386300139"
Mar 10 01:34:24.454128 kubelet[2626]: I0310 01:34:24.454017 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c60f91bb-601e-4707-9242-e02c9e862843-config-volume\") pod \"coredns-674b8bbfcf-2x7t7\" (UID: \"c60f91bb-601e-4707-9242-e02c9e862843\") " pod="kube-system/coredns-674b8bbfcf-2x7t7"
Mar 10 01:34:24.454128 kubelet[2626]: I0310 01:34:24.454104 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4whm2\" (UniqueName: \"kubernetes.io/projected/c60f91bb-601e-4707-9242-e02c9e862843-kube-api-access-4whm2\") pod \"coredns-674b8bbfcf-2x7t7\" (UID: \"c60f91bb-601e-4707-9242-e02c9e862843\") " pod="kube-system/coredns-674b8bbfcf-2x7t7"
Mar 10 01:34:24.454704 kubelet[2626]: I0310 01:34:24.454291 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2jpw\" (UniqueName: \"kubernetes.io/projected/04686b24-52fa-4b38-9366-360dbdcc9426-kube-api-access-h2jpw\") pod \"coredns-674b8bbfcf-wlsjp\" (UID: \"04686b24-52fa-4b38-9366-360dbdcc9426\") " pod="kube-system/coredns-674b8bbfcf-wlsjp"
Mar 10 01:34:24.454704 kubelet[2626]: I0310 01:34:24.454332 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04686b24-52fa-4b38-9366-360dbdcc9426-config-volume\") pod \"coredns-674b8bbfcf-wlsjp\" (UID: \"04686b24-52fa-4b38-9366-360dbdcc9426\") " pod="kube-system/coredns-674b8bbfcf-wlsjp"
Mar 10 01:34:24.672731 kubelet[2626]: E0310 01:34:24.664631 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:34:24.712720 containerd[1475]: time="2026-03-10T01:34:24.712466629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2x7t7,Uid:c60f91bb-601e-4707-9242-e02c9e862843,Namespace:kube-system,Attempt:0,}"
Mar 10 01:34:24.911026 kubelet[2626]: E0310 01:34:24.910549 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:34:24.916378 containerd[1475]: time="2026-03-10T01:34:24.916270888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wlsjp,Uid:04686b24-52fa-4b38-9366-360dbdcc9426,Namespace:kube-system,Attempt:0,}"
Mar 10 01:34:25.271840 kubelet[2626]: E0310 01:34:25.271577 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:34:27.380868 systemd-networkd[1371]: cilium_host: Link UP
Mar 10 01:34:27.383117 systemd-networkd[1371]: cilium_net: Link UP
Mar 10 01:34:27.389064 systemd-networkd[1371]: cilium_net: Gained carrier
Mar 10 01:34:27.389503 systemd-networkd[1371]: cilium_host: Gained carrier
Mar 10 01:34:27.391621 systemd-networkd[1371]: cilium_net: Gained IPv6LL
Mar 10 01:34:27.393698 systemd-networkd[1371]: cilium_host: Gained IPv6LL
Mar 10 01:34:27.836828 systemd-networkd[1371]: cilium_vxlan: Link UP
Mar 10 01:34:27.836837 systemd-networkd[1371]: cilium_vxlan: Gained carrier
Mar 10 01:34:27.873463 systemd[1]: run-containerd-runc-k8s.io-0f6f87b396230bdf7aa2724dc12382997053dd343d7c9dae24c522dc50687449-runc.o7EpNK.mount: Deactivated successfully.
Mar 10 01:34:28.374694 kernel: NET: Registered PF_ALG protocol family
Mar 10 01:34:28.991022 systemd-networkd[1371]: cilium_vxlan: Gained IPv6LL
Mar 10 01:34:30.265857 systemd[1]: run-containerd-runc-k8s.io-0f6f87b396230bdf7aa2724dc12382997053dd343d7c9dae24c522dc50687449-runc.rkWgzY.mount: Deactivated successfully.
Mar 10 01:34:30.975786 systemd-networkd[1371]: lxc_health: Link UP
Mar 10 01:34:31.013518 systemd-networkd[1371]: lxc_health: Gained carrier
Mar 10 01:34:31.023366 kubelet[2626]: E0310 01:34:31.022565 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:34:31.371097 systemd-networkd[1371]: lxc8e6377d09f51: Link UP
Mar 10 01:34:31.394381 kernel: eth0: renamed from tmp359fb
Mar 10 01:34:31.407910 systemd-networkd[1371]: lxc8e6377d09f51: Gained carrier
Mar 10 01:34:31.422094 systemd-networkd[1371]: lxcacb6bebc073f: Link UP
Mar 10 01:34:31.450359 kernel: eth0: renamed from tmpe521d
Mar 10 01:34:31.478797 systemd-networkd[1371]: lxcacb6bebc073f: Gained carrier
Mar 10 01:34:31.852977 kubelet[2626]: E0310 01:34:31.846757 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:34:32.701552 systemd-networkd[1371]: lxc_health: Gained IPv6LL
Mar 10 01:34:32.763645 systemd-networkd[1371]: lxcacb6bebc073f: Gained IPv6LL
Mar 10 01:34:33.282448 systemd-networkd[1371]: lxc8e6377d09f51: Gained IPv6LL
Mar 10 01:34:37.334092 systemd[1]: run-containerd-runc-k8s.io-0f6f87b396230bdf7aa2724dc12382997053dd343d7c9dae24c522dc50687449-runc.5GPKb8.mount: Deactivated successfully.
Mar 10 01:34:39.669932 containerd[1475]: time="2026-03-10T01:34:39.668092892Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 10 01:34:39.669932 containerd[1475]: time="2026-03-10T01:34:39.668466951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 10 01:34:39.669932 containerd[1475]: time="2026-03-10T01:34:39.668587666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:34:39.669932 containerd[1475]: time="2026-03-10T01:34:39.669305053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:34:39.713462 containerd[1475]: time="2026-03-10T01:34:39.711031884Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 10 01:34:39.713462 containerd[1475]: time="2026-03-10T01:34:39.711138613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 10 01:34:39.713462 containerd[1475]: time="2026-03-10T01:34:39.711165042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:34:39.713462 containerd[1475]: time="2026-03-10T01:34:39.711760423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 10 01:34:39.776175 systemd[1]: Started cri-containerd-359fb98a88d08c99d3e4c9386a719532a0c699ce56a48e115d1922feeb8ef6ec.scope - libcontainer container 359fb98a88d08c99d3e4c9386a719532a0c699ce56a48e115d1922feeb8ef6ec.
Mar 10 01:34:39.805679 systemd[1]: Started cri-containerd-e521df15c7e16b191fc84eaf32af6c2f1fc9d1c286f490629760db266fa00489.scope - libcontainer container e521df15c7e16b191fc84eaf32af6c2f1fc9d1c286f490629760db266fa00489.
Mar 10 01:34:39.844073 systemd-resolved[1376]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 10 01:34:39.857312 systemd-resolved[1376]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 10 01:34:40.083762 containerd[1475]: time="2026-03-10T01:34:40.082833024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2x7t7,Uid:c60f91bb-601e-4707-9242-e02c9e862843,Namespace:kube-system,Attempt:0,} returns sandbox id \"359fb98a88d08c99d3e4c9386a719532a0c699ce56a48e115d1922feeb8ef6ec\""
Mar 10 01:34:40.108834 kubelet[2626]: E0310 01:34:40.107763 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:34:40.134960 containerd[1475]: time="2026-03-10T01:34:40.134696498Z" level=info msg="CreateContainer within sandbox \"359fb98a88d08c99d3e4c9386a719532a0c699ce56a48e115d1922feeb8ef6ec\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 10 01:34:40.158561 containerd[1475]: time="2026-03-10T01:34:40.158395410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wlsjp,Uid:04686b24-52fa-4b38-9366-360dbdcc9426,Namespace:kube-system,Attempt:0,} returns sandbox id \"e521df15c7e16b191fc84eaf32af6c2f1fc9d1c286f490629760db266fa00489\""
Mar 10 01:34:40.165424 kubelet[2626]: E0310 01:34:40.164957 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:34:40.218776 containerd[1475]: time="2026-03-10T01:34:40.214350125Z" level=info msg="CreateContainer within sandbox \"e521df15c7e16b191fc84eaf32af6c2f1fc9d1c286f490629760db266fa00489\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 10 01:34:40.232393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount641043651.mount: Deactivated successfully.
Mar 10 01:34:40.280026 containerd[1475]: time="2026-03-10T01:34:40.279806078Z" level=info msg="CreateContainer within sandbox \"359fb98a88d08c99d3e4c9386a719532a0c699ce56a48e115d1922feeb8ef6ec\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ded7bc45a706721ab88e9fd182122dbac111a6de179d41a76a4c2280aaacc859\""
Mar 10 01:34:40.292024 containerd[1475]: time="2026-03-10T01:34:40.290598281Z" level=info msg="StartContainer for \"ded7bc45a706721ab88e9fd182122dbac111a6de179d41a76a4c2280aaacc859\""
Mar 10 01:34:40.361180 containerd[1475]: time="2026-03-10T01:34:40.360995348Z" level=info msg="CreateContainer within sandbox \"e521df15c7e16b191fc84eaf32af6c2f1fc9d1c286f490629760db266fa00489\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b73280fb8ba9860ed5efc04c35157f97e08db07d784e48a72ca47ad314c85442\""
Mar 10 01:34:40.364139 containerd[1475]: time="2026-03-10T01:34:40.363093973Z" level=info msg="StartContainer for \"b73280fb8ba9860ed5efc04c35157f97e08db07d784e48a72ca47ad314c85442\""
Mar 10 01:34:40.406573 sudo[1649]: pam_unix(sudo:session): session closed for user root
Mar 10 01:34:40.423123 sshd[1646]: pam_unix(sshd:session): session closed for user core
Mar 10 01:34:40.439616 systemd[1]: Started cri-containerd-ded7bc45a706721ab88e9fd182122dbac111a6de179d41a76a4c2280aaacc859.scope - libcontainer container ded7bc45a706721ab88e9fd182122dbac111a6de179d41a76a4c2280aaacc859.
Mar 10 01:34:40.446171 systemd[1]: sshd@6-10.0.0.144:22-10.0.0.1:57384.service: Deactivated successfully.
Mar 10 01:34:40.458897 systemd[1]: session-7.scope: Deactivated successfully.
Mar 10 01:34:40.459598 systemd[1]: session-7.scope: Consumed 15.118s CPU time, 162.6M memory peak, 0B memory swap peak.
Mar 10 01:34:40.463358 systemd-logind[1452]: Session 7 logged out. Waiting for processes to exit.
Mar 10 01:34:40.479673 systemd[1]: Started cri-containerd-b73280fb8ba9860ed5efc04c35157f97e08db07d784e48a72ca47ad314c85442.scope - libcontainer container b73280fb8ba9860ed5efc04c35157f97e08db07d784e48a72ca47ad314c85442.
Mar 10 01:34:40.483883 systemd-logind[1452]: Removed session 7.
Mar 10 01:34:40.567509 containerd[1475]: time="2026-03-10T01:34:40.567417067Z" level=info msg="StartContainer for \"ded7bc45a706721ab88e9fd182122dbac111a6de179d41a76a4c2280aaacc859\" returns successfully"
Mar 10 01:34:40.570817 containerd[1475]: time="2026-03-10T01:34:40.567689054Z" level=info msg="StartContainer for \"b73280fb8ba9860ed5efc04c35157f97e08db07d784e48a72ca47ad314c85442\" returns successfully"
Mar 10 01:34:41.021904 kubelet[2626]: E0310 01:34:41.021472 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:34:41.048688 kubelet[2626]: E0310 01:34:41.048459 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:34:41.178373 kubelet[2626]: I0310 01:34:41.177904 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-2x7t7" podStartSLOduration=62.177883282 podStartE2EDuration="1m2.177883282s" podCreationTimestamp="2026-03-10 01:33:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:34:41.159609084 +0000 UTC m=+63.149509621" watchObservedRunningTime="2026-03-10 01:34:41.177883282 +0000 UTC m=+63.167783809"
Mar 10 01:34:41.206153 kubelet[2626]: I0310 01:34:41.206026 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-wlsjp" podStartSLOduration=61.206002232 podStartE2EDuration="1m1.206002232s" podCreationTimestamp="2026-03-10 01:33:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:34:41.19888267 +0000 UTC m=+63.188783228" watchObservedRunningTime="2026-03-10 01:34:41.206002232 +0000 UTC m=+63.195902759"
Mar 10 01:34:42.091178 kubelet[2626]: E0310 01:34:42.084605 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:34:43.123056 kubelet[2626]: E0310 01:34:43.118806 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:34:49.754114 kubelet[2626]: E0310 01:34:49.751576 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:34:51.067777 kubelet[2626]: E0310 01:34:51.063019 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:34:51.295972 kubelet[2626]: E0310 01:34:51.295912 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:34:52.758551 kubelet[2626]: E0310 01:34:52.758037 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:35:10.745182 kubelet[2626]: E0310 01:35:10.742620 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:35:11.747681 kubelet[2626]: E0310 01:35:11.747085 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:35:50.741112 kubelet[2626]: E0310 01:35:50.740176 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:35:52.739309 kubelet[2626]: E0310 01:35:52.736087 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:35:55.271839 systemd[1]: Started sshd@7-10.0.0.144:22-10.0.0.1:46710.service - OpenSSH per-connection server daemon (10.0.0.1:46710).
Mar 10 01:35:55.337344 sshd[4198]: Accepted publickey for core from 10.0.0.1 port 46710 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:35:55.340443 sshd[4198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:35:55.353942 systemd-logind[1452]: New session 8 of user core.
Mar 10 01:35:55.365941 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 10 01:35:55.583493 sshd[4198]: pam_unix(sshd:session): session closed for user core
Mar 10 01:35:55.593191 systemd[1]: sshd@7-10.0.0.144:22-10.0.0.1:46710.service: Deactivated successfully.
Mar 10 01:35:55.596651 systemd[1]: session-8.scope: Deactivated successfully.
Mar 10 01:35:55.597753 systemd-logind[1452]: Session 8 logged out. Waiting for processes to exit.
Mar 10 01:35:55.600279 systemd-logind[1452]: Removed session 8.
Mar 10 01:35:58.736961 kubelet[2626]: E0310 01:35:58.736840 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:36:00.620379 systemd[1]: Started sshd@8-10.0.0.144:22-10.0.0.1:46718.service - OpenSSH per-connection server daemon (10.0.0.1:46718).
Mar 10 01:36:00.707810 sshd[4215]: Accepted publickey for core from 10.0.0.1 port 46718 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:36:00.718520 sshd[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:36:00.743342 kubelet[2626]: E0310 01:36:00.742722 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:36:00.747972 systemd-logind[1452]: New session 9 of user core.
Mar 10 01:36:00.772029 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 10 01:36:01.089374 sshd[4215]: pam_unix(sshd:session): session closed for user core
Mar 10 01:36:01.097964 systemd[1]: sshd@8-10.0.0.144:22-10.0.0.1:46718.service: Deactivated successfully.
Mar 10 01:36:01.104062 systemd[1]: session-9.scope: Deactivated successfully.
Mar 10 01:36:01.111805 systemd-logind[1452]: Session 9 logged out. Waiting for processes to exit.
Mar 10 01:36:01.117312 systemd-logind[1452]: Removed session 9.
Mar 10 01:36:06.147358 systemd[1]: Started sshd@9-10.0.0.144:22-10.0.0.1:51516.service - OpenSSH per-connection server daemon (10.0.0.1:51516).
Mar 10 01:36:06.197208 sshd[4231]: Accepted publickey for core from 10.0.0.1 port 51516 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:36:06.199771 sshd[4231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:36:06.213331 systemd-logind[1452]: New session 10 of user core.
Mar 10 01:36:06.226189 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 10 01:36:06.539707 sshd[4231]: pam_unix(sshd:session): session closed for user core
Mar 10 01:36:06.562903 systemd[1]: sshd@9-10.0.0.144:22-10.0.0.1:51516.service: Deactivated successfully.
Mar 10 01:36:06.576050 systemd[1]: session-10.scope: Deactivated successfully.
Mar 10 01:36:06.578063 systemd-logind[1452]: Session 10 logged out. Waiting for processes to exit.
Mar 10 01:36:06.588090 systemd-logind[1452]: Removed session 10.
Mar 10 01:36:07.740761 kubelet[2626]: E0310 01:36:07.739408 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:36:10.737364 kubelet[2626]: E0310 01:36:10.736945 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:36:11.593573 systemd[1]: Started sshd@10-10.0.0.144:22-10.0.0.1:51528.service - OpenSSH per-connection server daemon (10.0.0.1:51528).
Mar 10 01:36:11.650104 sshd[4246]: Accepted publickey for core from 10.0.0.1 port 51528 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:36:11.671795 sshd[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:36:11.693914 systemd-logind[1452]: New session 11 of user core.
Mar 10 01:36:11.720118 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 10 01:36:11.971391 sshd[4246]: pam_unix(sshd:session): session closed for user core
Mar 10 01:36:11.978362 systemd[1]: sshd@10-10.0.0.144:22-10.0.0.1:51528.service: Deactivated successfully.
Mar 10 01:36:11.981806 systemd[1]: session-11.scope: Deactivated successfully.
Mar 10 01:36:11.983343 systemd-logind[1452]: Session 11 logged out. Waiting for processes to exit.
Mar 10 01:36:11.986011 systemd-logind[1452]: Removed session 11.
Mar 10 01:36:16.998600 systemd[1]: Started sshd@11-10.0.0.144:22-10.0.0.1:35568.service - OpenSSH per-connection server daemon (10.0.0.1:35568).
Mar 10 01:36:17.072771 sshd[4264]: Accepted publickey for core from 10.0.0.1 port 35568 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:36:17.075449 sshd[4264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:36:17.085745 systemd-logind[1452]: New session 12 of user core.
Mar 10 01:36:17.092078 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 10 01:36:17.317611 sshd[4264]: pam_unix(sshd:session): session closed for user core
Mar 10 01:36:17.325719 systemd[1]: sshd@11-10.0.0.144:22-10.0.0.1:35568.service: Deactivated successfully.
Mar 10 01:36:17.331614 systemd[1]: session-12.scope: Deactivated successfully.
Mar 10 01:36:17.335504 systemd-logind[1452]: Session 12 logged out. Waiting for processes to exit.
Mar 10 01:36:17.340581 systemd-logind[1452]: Removed session 12.
Mar 10 01:36:22.587489 systemd[1]: Started sshd@12-10.0.0.144:22-10.0.0.1:51442.service - OpenSSH per-connection server daemon (10.0.0.1:51442).
Mar 10 01:36:24.492314 kubelet[2626]: E0310 01:36:24.491145 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:36:24.510699 sshd[4279]: Accepted publickey for core from 10.0.0.1 port 51442 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:36:24.515951 sshd[4279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:36:24.541479 systemd-logind[1452]: New session 13 of user core.
Mar 10 01:36:24.566117 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 10 01:36:25.740134 kubelet[2626]: E0310 01:36:25.739966 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:36:25.802091 sshd[4279]: pam_unix(sshd:session): session closed for user core
Mar 10 01:36:25.831808 systemd[1]: sshd@12-10.0.0.144:22-10.0.0.1:51442.service: Deactivated successfully.
Mar 10 01:36:25.839027 systemd[1]: session-13.scope: Deactivated successfully.
Mar 10 01:36:25.894570 systemd-logind[1452]: Session 13 logged out. Waiting for processes to exit.
Mar 10 01:36:25.925593 systemd[1]: Started sshd@13-10.0.0.144:22-10.0.0.1:51446.service - OpenSSH per-connection server daemon (10.0.0.1:51446).
Mar 10 01:36:25.929349 systemd-logind[1452]: Removed session 13.
Mar 10 01:36:26.019027 sshd[4294]: Accepted publickey for core from 10.0.0.1 port 51446 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:36:26.025523 sshd[4294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:36:26.079373 systemd-logind[1452]: New session 14 of user core.
Mar 10 01:36:26.117523 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 10 01:36:26.506326 sshd[4294]: pam_unix(sshd:session): session closed for user core
Mar 10 01:36:26.519434 systemd[1]: sshd@13-10.0.0.144:22-10.0.0.1:51446.service: Deactivated successfully.
Mar 10 01:36:26.522536 systemd[1]: session-14.scope: Deactivated successfully.
Mar 10 01:36:26.525667 systemd-logind[1452]: Session 14 logged out. Waiting for processes to exit.
Mar 10 01:36:26.545118 systemd[1]: Started sshd@14-10.0.0.144:22-10.0.0.1:51454.service - OpenSSH per-connection server daemon (10.0.0.1:51454).
Mar 10 01:36:26.546757 systemd-logind[1452]: Removed session 14.
Mar 10 01:36:26.617703 sshd[4307]: Accepted publickey for core from 10.0.0.1 port 51454 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:36:26.621074 sshd[4307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:36:26.634829 systemd-logind[1452]: New session 15 of user core.
Mar 10 01:36:26.645133 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 10 01:36:27.009949 sshd[4307]: pam_unix(sshd:session): session closed for user core
Mar 10 01:36:27.014965 systemd[1]: sshd@14-10.0.0.144:22-10.0.0.1:51454.service: Deactivated successfully.
Mar 10 01:36:27.018944 systemd[1]: session-15.scope: Deactivated successfully.
Mar 10 01:36:27.023992 systemd-logind[1452]: Session 15 logged out. Waiting for processes to exit.
Mar 10 01:36:27.026963 systemd-logind[1452]: Removed session 15.
Mar 10 01:36:32.031025 systemd[1]: Started sshd@15-10.0.0.144:22-10.0.0.1:51456.service - OpenSSH per-connection server daemon (10.0.0.1:51456).
Mar 10 01:36:32.099462 sshd[4322]: Accepted publickey for core from 10.0.0.1 port 51456 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:36:32.103558 sshd[4322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:36:32.121885 systemd-logind[1452]: New session 16 of user core.
Mar 10 01:36:32.140739 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 10 01:36:32.335142 sshd[4322]: pam_unix(sshd:session): session closed for user core
Mar 10 01:36:32.345581 systemd[1]: sshd@15-10.0.0.144:22-10.0.0.1:51456.service: Deactivated successfully.
Mar 10 01:36:32.349955 systemd[1]: session-16.scope: Deactivated successfully.
Mar 10 01:36:32.354014 systemd-logind[1452]: Session 16 logged out. Waiting for processes to exit.
Mar 10 01:36:32.367181 systemd-logind[1452]: Removed session 16.
Mar 10 01:36:37.448917 systemd[1]: Started sshd@16-10.0.0.144:22-10.0.0.1:54938.service - OpenSSH per-connection server daemon (10.0.0.1:54938).
Mar 10 01:36:37.537677 sshd[4337]: Accepted publickey for core from 10.0.0.1 port 54938 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:36:37.542573 sshd[4337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:36:37.579179 systemd-logind[1452]: New session 17 of user core.
Mar 10 01:36:37.597868 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 10 01:36:37.893833 sshd[4337]: pam_unix(sshd:session): session closed for user core
Mar 10 01:36:37.988122 systemd[1]: sshd@16-10.0.0.144:22-10.0.0.1:54938.service: Deactivated successfully.
Mar 10 01:36:37.999033 systemd[1]: session-17.scope: Deactivated successfully.
Mar 10 01:36:38.003414 systemd-logind[1452]: Session 17 logged out. Waiting for processes to exit.
Mar 10 01:36:38.005470 systemd-logind[1452]: Removed session 17.
Mar 10 01:36:42.909509 systemd[1]: Started sshd@17-10.0.0.144:22-10.0.0.1:50746.service - OpenSSH per-connection server daemon (10.0.0.1:50746).
Mar 10 01:36:43.007798 sshd[4353]: Accepted publickey for core from 10.0.0.1 port 50746 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:36:43.010149 sshd[4353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:36:43.025088 systemd-logind[1452]: New session 18 of user core.
Mar 10 01:36:43.031494 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 10 01:36:43.271773 sshd[4353]: pam_unix(sshd:session): session closed for user core
Mar 10 01:36:43.287915 systemd[1]: sshd@17-10.0.0.144:22-10.0.0.1:50746.service: Deactivated successfully.
Mar 10 01:36:43.292418 systemd[1]: session-18.scope: Deactivated successfully.
Mar 10 01:36:43.297509 systemd-logind[1452]: Session 18 logged out. Waiting for processes to exit.
Mar 10 01:36:43.310081 systemd-logind[1452]: Removed session 18.
Mar 10 01:36:48.327763 systemd[1]: Started sshd@18-10.0.0.144:22-10.0.0.1:50748.service - OpenSSH per-connection server daemon (10.0.0.1:50748).
Mar 10 01:36:48.400650 sshd[4372]: Accepted publickey for core from 10.0.0.1 port 50748 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:36:48.404728 sshd[4372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:36:48.426335 systemd-logind[1452]: New session 19 of user core.
Mar 10 01:36:48.435729 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 10 01:36:48.685077 sshd[4372]: pam_unix(sshd:session): session closed for user core
Mar 10 01:36:48.696496 systemd[1]: sshd@18-10.0.0.144:22-10.0.0.1:50748.service: Deactivated successfully.
Mar 10 01:36:48.701090 systemd[1]: session-19.scope: Deactivated successfully.
Mar 10 01:36:48.707337 systemd-logind[1452]: Session 19 logged out. Waiting for processes to exit.
Mar 10 01:36:48.709823 systemd-logind[1452]: Removed session 19.
Mar 10 01:36:53.728892 systemd[1]: Started sshd@19-10.0.0.144:22-10.0.0.1:45022.service - OpenSSH per-connection server daemon (10.0.0.1:45022).
Mar 10 01:36:53.834967 sshd[4386]: Accepted publickey for core from 10.0.0.1 port 45022 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:36:53.839751 sshd[4386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:36:53.869568 systemd-logind[1452]: New session 20 of user core.
Mar 10 01:36:53.885057 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 10 01:36:54.121568 sshd[4386]: pam_unix(sshd:session): session closed for user core
Mar 10 01:36:54.129185 systemd[1]: sshd@19-10.0.0.144:22-10.0.0.1:45022.service: Deactivated successfully.
Mar 10 01:36:54.132413 systemd[1]: session-20.scope: Deactivated successfully.
Mar 10 01:36:54.134683 systemd-logind[1452]: Session 20 logged out. Waiting for processes to exit.
Mar 10 01:36:54.137149 systemd-logind[1452]: Removed session 20.
Mar 10 01:36:56.739092 kubelet[2626]: E0310 01:36:56.738828 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:36:59.148340 systemd[1]: Started sshd@20-10.0.0.144:22-10.0.0.1:45038.service - OpenSSH per-connection server daemon (10.0.0.1:45038).
Mar 10 01:36:59.256051 sshd[4403]: Accepted publickey for core from 10.0.0.1 port 45038 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:36:59.260283 sshd[4403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:36:59.282326 systemd-logind[1452]: New session 21 of user core.
Mar 10 01:36:59.285713 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 10 01:36:59.560584 sshd[4403]: pam_unix(sshd:session): session closed for user core
Mar 10 01:36:59.569090 systemd[1]: sshd@20-10.0.0.144:22-10.0.0.1:45038.service: Deactivated successfully.
Mar 10 01:36:59.578721 systemd[1]: session-21.scope: Deactivated successfully.
Mar 10 01:36:59.580625 systemd-logind[1452]: Session 21 logged out. Waiting for processes to exit.
Mar 10 01:36:59.586291 systemd-logind[1452]: Removed session 21.
Mar 10 01:37:01.741201 kubelet[2626]: E0310 01:37:01.739111 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:37:04.609075 systemd[1]: Started sshd@21-10.0.0.144:22-10.0.0.1:47296.service - OpenSSH per-connection server daemon (10.0.0.1:47296).
Mar 10 01:37:04.658906 sshd[4419]: Accepted publickey for core from 10.0.0.1 port 47296 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:37:04.661800 sshd[4419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:37:04.683574 systemd-logind[1452]: New session 22 of user core.
Mar 10 01:37:04.693018 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 10 01:37:04.928833 sshd[4419]: pam_unix(sshd:session): session closed for user core
Mar 10 01:37:04.945493 systemd[1]: sshd@21-10.0.0.144:22-10.0.0.1:47296.service: Deactivated successfully.
Mar 10 01:37:04.949802 systemd[1]: session-22.scope: Deactivated successfully.
Mar 10 01:37:04.962943 systemd-logind[1452]: Session 22 logged out. Waiting for processes to exit.
Mar 10 01:37:04.980320 systemd[1]: Started sshd@22-10.0.0.144:22-10.0.0.1:47308.service - OpenSSH per-connection server daemon (10.0.0.1:47308).
Mar 10 01:37:04.988739 systemd-logind[1452]: Removed session 22.
Mar 10 01:37:05.032280 sshd[4434]: Accepted publickey for core from 10.0.0.1 port 47308 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:37:05.036826 sshd[4434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:37:05.052185 systemd-logind[1452]: New session 23 of user core.
Mar 10 01:37:05.065081 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 10 01:37:05.760388 sshd[4434]: pam_unix(sshd:session): session closed for user core
Mar 10 01:37:05.780786 systemd[1]: sshd@22-10.0.0.144:22-10.0.0.1:47308.service: Deactivated successfully.
Mar 10 01:37:05.786306 systemd[1]: session-23.scope: Deactivated successfully.
Mar 10 01:37:05.790540 systemd-logind[1452]: Session 23 logged out. Waiting for processes to exit.
Mar 10 01:37:05.802388 systemd[1]: Started sshd@23-10.0.0.144:22-10.0.0.1:47324.service - OpenSSH per-connection server daemon (10.0.0.1:47324).
Mar 10 01:37:05.804123 systemd-logind[1452]: Removed session 23.
Mar 10 01:37:05.889592 sshd[4447]: Accepted publickey for core from 10.0.0.1 port 47324 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:37:05.895061 sshd[4447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:37:05.910846 systemd-logind[1452]: New session 24 of user core.
Mar 10 01:37:05.919077 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 10 01:37:07.190877 sshd[4447]: pam_unix(sshd:session): session closed for user core
Mar 10 01:37:07.210457 systemd[1]: sshd@23-10.0.0.144:22-10.0.0.1:47324.service: Deactivated successfully.
Mar 10 01:37:07.215907 systemd[1]: session-24.scope: Deactivated successfully.
Mar 10 01:37:07.221105 systemd-logind[1452]: Session 24 logged out. Waiting for processes to exit.
Mar 10 01:37:07.238590 systemd[1]: Started sshd@24-10.0.0.144:22-10.0.0.1:47328.service - OpenSSH per-connection server daemon (10.0.0.1:47328).
Mar 10 01:37:07.245029 systemd-logind[1452]: Removed session 24.
Mar 10 01:37:07.298517 sshd[4478]: Accepted publickey for core from 10.0.0.1 port 47328 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:37:07.301485 sshd[4478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:37:07.318534 systemd-logind[1452]: New session 25 of user core.
Mar 10 01:37:07.322827 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 10 01:37:07.750475 sshd[4478]: pam_unix(sshd:session): session closed for user core
Mar 10 01:37:07.766361 systemd[1]: sshd@24-10.0.0.144:22-10.0.0.1:47328.service: Deactivated successfully.
Mar 10 01:37:07.770860 systemd[1]: session-25.scope: Deactivated successfully.
Mar 10 01:37:07.774629 systemd-logind[1452]: Session 25 logged out. Waiting for processes to exit.
Mar 10 01:37:07.782859 systemd[1]: Started sshd@25-10.0.0.144:22-10.0.0.1:47330.service - OpenSSH per-connection server daemon (10.0.0.1:47330).
Mar 10 01:37:07.787717 systemd-logind[1452]: Removed session 25.
Mar 10 01:37:07.862010 sshd[4490]: Accepted publickey for core from 10.0.0.1 port 47330 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:37:07.866878 sshd[4490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:37:07.888928 systemd-logind[1452]: New session 26 of user core.
Mar 10 01:37:07.903614 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 10 01:37:08.242306 sshd[4490]: pam_unix(sshd:session): session closed for user core
Mar 10 01:37:08.248828 systemd[1]: sshd@25-10.0.0.144:22-10.0.0.1:47330.service: Deactivated successfully.
Mar 10 01:37:08.254709 systemd[1]: session-26.scope: Deactivated successfully.
Mar 10 01:37:08.263302 systemd-logind[1452]: Session 26 logged out. Waiting for processes to exit.
Mar 10 01:37:08.267952 systemd-logind[1452]: Removed session 26.
Mar 10 01:37:13.288802 systemd[1]: Started sshd@26-10.0.0.144:22-10.0.0.1:38486.service - OpenSSH per-connection server daemon (10.0.0.1:38486).
Mar 10 01:37:13.359781 sshd[4504]: Accepted publickey for core from 10.0.0.1 port 38486 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:37:13.363067 sshd[4504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:37:13.378447 systemd-logind[1452]: New session 27 of user core.
Mar 10 01:37:13.395717 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 10 01:37:13.743159 sshd[4504]: pam_unix(sshd:session): session closed for user core
Mar 10 01:37:13.756394 systemd[1]: sshd@26-10.0.0.144:22-10.0.0.1:38486.service: Deactivated successfully.
Mar 10 01:37:13.761410 systemd[1]: session-27.scope: Deactivated successfully.
Mar 10 01:37:13.766465 systemd-logind[1452]: Session 27 logged out. Waiting for processes to exit.
Mar 10 01:37:13.770110 systemd-logind[1452]: Removed session 27.
Mar 10 01:37:15.740363 kubelet[2626]: E0310 01:37:15.737551 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:37:18.770816 systemd[1]: Started sshd@27-10.0.0.144:22-10.0.0.1:38500.service - OpenSSH per-connection server daemon (10.0.0.1:38500).
Mar 10 01:37:18.830699 sshd[4521]: Accepted publickey for core from 10.0.0.1 port 38500 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:37:18.834127 sshd[4521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:37:18.848736 systemd-logind[1452]: New session 28 of user core.
Mar 10 01:37:18.859028 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 10 01:37:19.077024 sshd[4521]: pam_unix(sshd:session): session closed for user core
Mar 10 01:37:19.092732 systemd[1]: sshd@27-10.0.0.144:22-10.0.0.1:38500.service: Deactivated successfully.
Mar 10 01:37:19.096602 systemd[1]: session-28.scope: Deactivated successfully.
Mar 10 01:37:19.098699 systemd-logind[1452]: Session 28 logged out. Waiting for processes to exit.
Mar 10 01:37:19.100323 systemd-logind[1452]: Removed session 28.
Mar 10 01:37:19.737046 kubelet[2626]: E0310 01:37:19.736064 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:37:21.739133 kubelet[2626]: E0310 01:37:21.736607 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:37:24.101849 systemd[1]: Started sshd@28-10.0.0.144:22-10.0.0.1:52392.service - OpenSSH per-connection server daemon (10.0.0.1:52392).
Mar 10 01:37:24.195415 sshd[4538]: Accepted publickey for core from 10.0.0.1 port 52392 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:37:24.196602 sshd[4538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:37:24.212312 systemd-logind[1452]: New session 29 of user core.
Mar 10 01:37:24.222624 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 10 01:37:24.445088 sshd[4538]: pam_unix(sshd:session): session closed for user core
Mar 10 01:37:24.452600 systemd[1]: sshd@28-10.0.0.144:22-10.0.0.1:52392.service: Deactivated successfully.
Mar 10 01:37:24.458156 systemd[1]: session-29.scope: Deactivated successfully.
Mar 10 01:37:24.460439 systemd-logind[1452]: Session 29 logged out. Waiting for processes to exit.
Mar 10 01:37:24.465437 systemd-logind[1452]: Removed session 29.
Mar 10 01:37:28.747324 kubelet[2626]: E0310 01:37:28.745586 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:37:29.491861 systemd[1]: Started sshd@29-10.0.0.144:22-10.0.0.1:52394.service - OpenSSH per-connection server daemon (10.0.0.1:52394).
Mar 10 01:37:29.585620 sshd[4552]: Accepted publickey for core from 10.0.0.1 port 52394 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:37:29.588589 sshd[4552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:37:29.603560 systemd-logind[1452]: New session 30 of user core.
Mar 10 01:37:29.610729 systemd[1]: Started session-30.scope - Session 30 of User core.
Mar 10 01:37:29.837628 sshd[4552]: pam_unix(sshd:session): session closed for user core
Mar 10 01:37:29.843160 systemd[1]: sshd@29-10.0.0.144:22-10.0.0.1:52394.service: Deactivated successfully.
Mar 10 01:37:29.847803 systemd[1]: session-30.scope: Deactivated successfully.
Mar 10 01:37:29.858080 systemd-logind[1452]: Session 30 logged out. Waiting for processes to exit.
Mar 10 01:37:29.869679 systemd-logind[1452]: Removed session 30.
Mar 10 01:37:34.898890 systemd[1]: Started sshd@30-10.0.0.144:22-10.0.0.1:39950.service - OpenSSH per-connection server daemon (10.0.0.1:39950).
Mar 10 01:37:35.022631 sshd[4566]: Accepted publickey for core from 10.0.0.1 port 39950 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:37:35.027574 sshd[4566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:37:35.045679 systemd-logind[1452]: New session 31 of user core.
Mar 10 01:37:35.081359 systemd[1]: Started session-31.scope - Session 31 of User core.
Mar 10 01:37:35.372467 sshd[4566]: pam_unix(sshd:session): session closed for user core
Mar 10 01:37:35.388159 systemd[1]: sshd@30-10.0.0.144:22-10.0.0.1:39950.service: Deactivated successfully.
Mar 10 01:37:35.392461 systemd[1]: session-31.scope: Deactivated successfully.
Mar 10 01:37:35.394043 systemd-logind[1452]: Session 31 logged out. Waiting for processes to exit.
Mar 10 01:37:35.398840 systemd-logind[1452]: Removed session 31.
Mar 10 01:37:40.410098 systemd[1]: Started sshd@31-10.0.0.144:22-10.0.0.1:39962.service - OpenSSH per-connection server daemon (10.0.0.1:39962).
Mar 10 01:37:40.491960 sshd[4583]: Accepted publickey for core from 10.0.0.1 port 39962 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:37:40.503407 sshd[4583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:37:40.542317 systemd-logind[1452]: New session 32 of user core.
Mar 10 01:37:40.553748 systemd[1]: Started session-32.scope - Session 32 of User core.
Mar 10 01:37:40.790896 sshd[4583]: pam_unix(sshd:session): session closed for user core
Mar 10 01:37:40.809704 systemd[1]: sshd@31-10.0.0.144:22-10.0.0.1:39962.service: Deactivated successfully.
Mar 10 01:37:40.813463 systemd[1]: session-32.scope: Deactivated successfully.
Mar 10 01:37:40.816339 systemd-logind[1452]: Session 32 logged out. Waiting for processes to exit.
Mar 10 01:37:40.837087 systemd[1]: Started sshd@32-10.0.0.144:22-10.0.0.1:39974.service - OpenSSH per-connection server daemon (10.0.0.1:39974).
Mar 10 01:37:40.841396 systemd-logind[1452]: Removed session 32.
Mar 10 01:37:40.892012 sshd[4597]: Accepted publickey for core from 10.0.0.1 port 39974 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc
Mar 10 01:37:40.896666 sshd[4597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 01:37:40.919092 systemd-logind[1452]: New session 33 of user core.
Mar 10 01:37:40.936677 systemd[1]: Started session-33.scope - Session 33 of User core.
Mar 10 01:37:42.906947 containerd[1475]: time="2026-03-10T01:37:42.906505989Z" level=info msg="StopContainer for \"f7a99bb7629d8ec5bf3b425b6c9248e66f64d8de4bb58691c5117ce86c7b5468\" with timeout 30 (s)"
Mar 10 01:37:42.913711 containerd[1475]: time="2026-03-10T01:37:42.913631057Z" level=info msg="Stop container \"f7a99bb7629d8ec5bf3b425b6c9248e66f64d8de4bb58691c5117ce86c7b5468\" with signal terminated"
Mar 10 01:37:42.963576 systemd[1]: cri-containerd-f7a99bb7629d8ec5bf3b425b6c9248e66f64d8de4bb58691c5117ce86c7b5468.scope: Deactivated successfully.
Mar 10 01:37:42.964123 systemd[1]: cri-containerd-f7a99bb7629d8ec5bf3b425b6c9248e66f64d8de4bb58691c5117ce86c7b5468.scope: Consumed 2.010s CPU time.
Mar 10 01:37:43.001319 containerd[1475]: time="2026-03-10T01:37:42.998077909Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 10 01:37:43.013763 containerd[1475]: time="2026-03-10T01:37:43.013706561Z" level=info msg="StopContainer for \"0f6f87b396230bdf7aa2724dc12382997053dd343d7c9dae24c522dc50687449\" with timeout 2 (s)"
Mar 10 01:37:43.014758 containerd[1475]: time="2026-03-10T01:37:43.014431183Z" level=info msg="Stop container \"0f6f87b396230bdf7aa2724dc12382997053dd343d7c9dae24c522dc50687449\" with signal terminated"
Mar 10 01:37:43.025987 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f7a99bb7629d8ec5bf3b425b6c9248e66f64d8de4bb58691c5117ce86c7b5468-rootfs.mount: Deactivated successfully.
Mar 10 01:37:43.034438 systemd-networkd[1371]: lxc_health: Link DOWN
Mar 10 01:37:43.034462 systemd-networkd[1371]: lxc_health: Lost carrier
Mar 10 01:37:43.053107 containerd[1475]: time="2026-03-10T01:37:43.053013773Z" level=info msg="shim disconnected" id=f7a99bb7629d8ec5bf3b425b6c9248e66f64d8de4bb58691c5117ce86c7b5468 namespace=k8s.io
Mar 10 01:37:43.053951 containerd[1475]: time="2026-03-10T01:37:43.053507644Z" level=warning msg="cleaning up after shim disconnected" id=f7a99bb7629d8ec5bf3b425b6c9248e66f64d8de4bb58691c5117ce86c7b5468 namespace=k8s.io
Mar 10 01:37:43.053951 containerd[1475]: time="2026-03-10T01:37:43.053608282Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 10 01:37:43.083491 systemd[1]: cri-containerd-0f6f87b396230bdf7aa2724dc12382997053dd343d7c9dae24c522dc50687449.scope: Deactivated successfully.
Mar 10 01:37:43.084201 systemd[1]: cri-containerd-0f6f87b396230bdf7aa2724dc12382997053dd343d7c9dae24c522dc50687449.scope: Consumed 21.491s CPU time.
Mar 10 01:37:43.129104 containerd[1475]: time="2026-03-10T01:37:43.128990130Z" level=info msg="StopContainer for \"f7a99bb7629d8ec5bf3b425b6c9248e66f64d8de4bb58691c5117ce86c7b5468\" returns successfully"
Mar 10 01:37:43.135312 containerd[1475]: time="2026-03-10T01:37:43.135037052Z" level=info msg="StopPodSandbox for \"83456541c2a75d29203a877d7b467b0d3fe8458f182c152e0ef10c36fa99edaf\""
Mar 10 01:37:43.135312 containerd[1475]: time="2026-03-10T01:37:43.135110929Z" level=info msg="Container to stop \"f7a99bb7629d8ec5bf3b425b6c9248e66f64d8de4bb58691c5117ce86c7b5468\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 10 01:37:43.138984 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f6f87b396230bdf7aa2724dc12382997053dd343d7c9dae24c522dc50687449-rootfs.mount: Deactivated successfully.
Mar 10 01:37:43.139513 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-83456541c2a75d29203a877d7b467b0d3fe8458f182c152e0ef10c36fa99edaf-shm.mount: Deactivated successfully.
Mar 10 01:37:43.171118 systemd[1]: cri-containerd-83456541c2a75d29203a877d7b467b0d3fe8458f182c152e0ef10c36fa99edaf.scope: Deactivated successfully.
Mar 10 01:37:43.203979 containerd[1475]: time="2026-03-10T01:37:43.203510973Z" level=info msg="shim disconnected" id=0f6f87b396230bdf7aa2724dc12382997053dd343d7c9dae24c522dc50687449 namespace=k8s.io
Mar 10 01:37:43.203979 containerd[1475]: time="2026-03-10T01:37:43.203571595Z" level=warning msg="cleaning up after shim disconnected" id=0f6f87b396230bdf7aa2724dc12382997053dd343d7c9dae24c522dc50687449 namespace=k8s.io
Mar 10 01:37:43.203979 containerd[1475]: time="2026-03-10T01:37:43.203580983Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 10 01:37:43.228063 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83456541c2a75d29203a877d7b467b0d3fe8458f182c152e0ef10c36fa99edaf-rootfs.mount: Deactivated successfully.
Mar 10 01:37:43.241587 containerd[1475]: time="2026-03-10T01:37:43.240731727Z" level=info msg="shim disconnected" id=83456541c2a75d29203a877d7b467b0d3fe8458f182c152e0ef10c36fa99edaf namespace=k8s.io
Mar 10 01:37:43.241587 containerd[1475]: time="2026-03-10T01:37:43.240819582Z" level=warning msg="cleaning up after shim disconnected" id=83456541c2a75d29203a877d7b467b0d3fe8458f182c152e0ef10c36fa99edaf namespace=k8s.io
Mar 10 01:37:43.241587 containerd[1475]: time="2026-03-10T01:37:43.240835291Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 10 01:37:43.244954 containerd[1475]: time="2026-03-10T01:37:43.244911266Z" level=info msg="StopContainer for \"0f6f87b396230bdf7aa2724dc12382997053dd343d7c9dae24c522dc50687449\" returns successfully"
Mar 10 01:37:43.248305 containerd[1475]: time="2026-03-10T01:37:43.247919359Z" level=info msg="StopPodSandbox for \"28b5c95e15b0d3cebb2a3ac080cdd51d27c91fd6eeebe0958fbe601374191208\""
Mar 10 01:37:43.248305 containerd[1475]: time="2026-03-10T01:37:43.247984881Z" level=info msg="Container to stop \"4d23fe64b9515390e7b405b3524c15c6fe164ce265debc4782b1caea5b162145\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 10 01:37:43.248305 containerd[1475]: time="2026-03-10T01:37:43.248006983Z" level=info msg="Container to stop \"bc020d122e54a7b68cac98b65474174821289949d6752509cc23a4de0cae3932\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 10 01:37:43.248305 containerd[1475]: time="2026-03-10T01:37:43.248025628Z" level=info msg="Container to stop \"a27ada461bd437ff1bdd44d0dadbf9a845080783fc283e1e558ca9b2732259ab\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 10 01:37:43.248305 containerd[1475]: time="2026-03-10T01:37:43.248040726Z" level=info msg="Container to stop \"0f6f87b396230bdf7aa2724dc12382997053dd343d7c9dae24c522dc50687449\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 10 01:37:43.248305 containerd[1475]: time="2026-03-10T01:37:43.248055974Z" level=info msg="Container to stop \"38ab32b55c473b48e183c26e664b9bf05379bb739a687786161248bab8dfc292\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 10 01:37:43.253532 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-28b5c95e15b0d3cebb2a3ac080cdd51d27c91fd6eeebe0958fbe601374191208-shm.mount: Deactivated successfully.
Mar 10 01:37:43.266530 systemd[1]: cri-containerd-28b5c95e15b0d3cebb2a3ac080cdd51d27c91fd6eeebe0958fbe601374191208.scope: Deactivated successfully.
Mar 10 01:37:43.279578 containerd[1475]: time="2026-03-10T01:37:43.279438181Z" level=info msg="TearDown network for sandbox \"83456541c2a75d29203a877d7b467b0d3fe8458f182c152e0ef10c36fa99edaf\" successfully"
Mar 10 01:37:43.279578 containerd[1475]: time="2026-03-10T01:37:43.279508300Z" level=info msg="StopPodSandbox for \"83456541c2a75d29203a877d7b467b0d3fe8458f182c152e0ef10c36fa99edaf\" returns successfully"
Mar 10 01:37:43.331467 containerd[1475]: time="2026-03-10T01:37:43.331340370Z" level=info msg="shim disconnected" id=28b5c95e15b0d3cebb2a3ac080cdd51d27c91fd6eeebe0958fbe601374191208 namespace=k8s.io
Mar 10 01:37:43.331467 containerd[1475]: time="2026-03-10T01:37:43.331455645Z" level=warning msg="cleaning up after shim disconnected" id=28b5c95e15b0d3cebb2a3ac080cdd51d27c91fd6eeebe0958fbe601374191208 namespace=k8s.io
Mar 10 01:37:43.331467 containerd[1475]: time="2026-03-10T01:37:43.331470794Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 10 01:37:43.361943 containerd[1475]: time="2026-03-10T01:37:43.361767691Z" level=warning msg="cleanup warnings time=\"2026-03-10T01:37:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 10 01:37:43.363486 kubelet[2626]: I0310 01:37:43.363386 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d0168a63-da04-4162-9ead-609751138a20-cilium-config-path\") pod \"d0168a63-da04-4162-9ead-609751138a20\" (UID: \"d0168a63-da04-4162-9ead-609751138a20\") "
Mar 10 01:37:43.365609 kubelet[2626]: I0310 01:37:43.364319 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnj9f\" (UniqueName: \"kubernetes.io/projected/d0168a63-da04-4162-9ead-609751138a20-kube-api-access-lnj9f\") pod \"d0168a63-da04-4162-9ead-609751138a20\" (UID: \"d0168a63-da04-4162-9ead-609751138a20\") "
Mar 10 01:37:43.365730 containerd[1475]: time="2026-03-10T01:37:43.365084091Z" level=info msg="TearDown network for sandbox \"28b5c95e15b0d3cebb2a3ac080cdd51d27c91fd6eeebe0958fbe601374191208\" successfully"
Mar 10 01:37:43.365730 containerd[1475]: time="2026-03-10T01:37:43.365119477Z" level=info msg="StopPodSandbox for \"28b5c95e15b0d3cebb2a3ac080cdd51d27c91fd6eeebe0958fbe601374191208\" returns successfully"
Mar 10 01:37:43.369918 kubelet[2626]: I0310 01:37:43.369808 2626 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0168a63-da04-4162-9ead-609751138a20-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d0168a63-da04-4162-9ead-609751138a20" (UID: "d0168a63-da04-4162-9ead-609751138a20"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 10 01:37:43.376335 kubelet[2626]: I0310 01:37:43.375943 2626 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0168a63-da04-4162-9ead-609751138a20-kube-api-access-lnj9f" (OuterVolumeSpecName: "kube-api-access-lnj9f") pod "d0168a63-da04-4162-9ead-609751138a20" (UID: "d0168a63-da04-4162-9ead-609751138a20"). InnerVolumeSpecName "kube-api-access-lnj9f".
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 10 01:37:43.466441 kubelet[2626]: I0310 01:37:43.466137 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1aab988c-165e-410e-a954-fb952044c1dc-lib-modules\") pod \"1aab988c-165e-410e-a954-fb952044c1dc\" (UID: \"1aab988c-165e-410e-a954-fb952044c1dc\") " Mar 10 01:37:43.468106 kubelet[2626]: I0310 01:37:43.466784 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1aab988c-165e-410e-a954-fb952044c1dc-host-proc-sys-net\") pod \"1aab988c-165e-410e-a954-fb952044c1dc\" (UID: \"1aab988c-165e-410e-a954-fb952044c1dc\") " Mar 10 01:37:43.468106 kubelet[2626]: I0310 01:37:43.466839 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1aab988c-165e-410e-a954-fb952044c1dc-cni-path\") pod \"1aab988c-165e-410e-a954-fb952044c1dc\" (UID: \"1aab988c-165e-410e-a954-fb952044c1dc\") " Mar 10 01:37:43.468106 kubelet[2626]: I0310 01:37:43.466875 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1aab988c-165e-410e-a954-fb952044c1dc-hubble-tls\") pod \"1aab988c-165e-410e-a954-fb952044c1dc\" (UID: \"1aab988c-165e-410e-a954-fb952044c1dc\") " Mar 10 01:37:43.468106 kubelet[2626]: I0310 01:37:43.466900 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1aab988c-165e-410e-a954-fb952044c1dc-cilium-config-path\") pod \"1aab988c-165e-410e-a954-fb952044c1dc\" (UID: \"1aab988c-165e-410e-a954-fb952044c1dc\") " Mar 10 01:37:43.468106 kubelet[2626]: I0310 01:37:43.466938 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86qtj\" (UniqueName: 
\"kubernetes.io/projected/1aab988c-165e-410e-a954-fb952044c1dc-kube-api-access-86qtj\") pod \"1aab988c-165e-410e-a954-fb952044c1dc\" (UID: \"1aab988c-165e-410e-a954-fb952044c1dc\") " Mar 10 01:37:43.468106 kubelet[2626]: I0310 01:37:43.466961 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1aab988c-165e-410e-a954-fb952044c1dc-bpf-maps\") pod \"1aab988c-165e-410e-a954-fb952044c1dc\" (UID: \"1aab988c-165e-410e-a954-fb952044c1dc\") " Mar 10 01:37:43.468492 kubelet[2626]: I0310 01:37:43.466983 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1aab988c-165e-410e-a954-fb952044c1dc-xtables-lock\") pod \"1aab988c-165e-410e-a954-fb952044c1dc\" (UID: \"1aab988c-165e-410e-a954-fb952044c1dc\") " Mar 10 01:37:43.468492 kubelet[2626]: I0310 01:37:43.467008 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1aab988c-165e-410e-a954-fb952044c1dc-clustermesh-secrets\") pod \"1aab988c-165e-410e-a954-fb952044c1dc\" (UID: \"1aab988c-165e-410e-a954-fb952044c1dc\") " Mar 10 01:37:43.468492 kubelet[2626]: I0310 01:37:43.467032 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1aab988c-165e-410e-a954-fb952044c1dc-cilium-cgroup\") pod \"1aab988c-165e-410e-a954-fb952044c1dc\" (UID: \"1aab988c-165e-410e-a954-fb952044c1dc\") " Mar 10 01:37:43.468492 kubelet[2626]: I0310 01:37:43.467056 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1aab988c-165e-410e-a954-fb952044c1dc-host-proc-sys-kernel\") pod \"1aab988c-165e-410e-a954-fb952044c1dc\" (UID: \"1aab988c-165e-410e-a954-fb952044c1dc\") " Mar 10 01:37:43.468492 kubelet[2626]: I0310 01:37:43.467077 
2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1aab988c-165e-410e-a954-fb952044c1dc-etc-cni-netd\") pod \"1aab988c-165e-410e-a954-fb952044c1dc\" (UID: \"1aab988c-165e-410e-a954-fb952044c1dc\") " Mar 10 01:37:43.468492 kubelet[2626]: I0310 01:37:43.467098 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1aab988c-165e-410e-a954-fb952044c1dc-hostproc\") pod \"1aab988c-165e-410e-a954-fb952044c1dc\" (UID: \"1aab988c-165e-410e-a954-fb952044c1dc\") " Mar 10 01:37:43.468795 kubelet[2626]: I0310 01:37:43.467117 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1aab988c-165e-410e-a954-fb952044c1dc-cilium-run\") pod \"1aab988c-165e-410e-a954-fb952044c1dc\" (UID: \"1aab988c-165e-410e-a954-fb952044c1dc\") " Mar 10 01:37:43.468795 kubelet[2626]: I0310 01:37:43.467349 2626 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d0168a63-da04-4162-9ead-609751138a20-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 10 01:37:43.468795 kubelet[2626]: I0310 01:37:43.467366 2626 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lnj9f\" (UniqueName: \"kubernetes.io/projected/d0168a63-da04-4162-9ead-609751138a20-kube-api-access-lnj9f\") on node \"localhost\" DevicePath \"\"" Mar 10 01:37:43.468795 kubelet[2626]: I0310 01:37:43.466426 2626 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1aab988c-165e-410e-a954-fb952044c1dc-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1aab988c-165e-410e-a954-fb952044c1dc" (UID: "1aab988c-165e-410e-a954-fb952044c1dc"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 10 01:37:43.468795 kubelet[2626]: I0310 01:37:43.467426 2626 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1aab988c-165e-410e-a954-fb952044c1dc-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1aab988c-165e-410e-a954-fb952044c1dc" (UID: "1aab988c-165e-410e-a954-fb952044c1dc"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 10 01:37:43.468795 kubelet[2626]: I0310 01:37:43.467507 2626 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1aab988c-165e-410e-a954-fb952044c1dc-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1aab988c-165e-410e-a954-fb952044c1dc" (UID: "1aab988c-165e-410e-a954-fb952044c1dc"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 10 01:37:43.469683 kubelet[2626]: I0310 01:37:43.467534 2626 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1aab988c-165e-410e-a954-fb952044c1dc-cni-path" (OuterVolumeSpecName: "cni-path") pod "1aab988c-165e-410e-a954-fb952044c1dc" (UID: "1aab988c-165e-410e-a954-fb952044c1dc"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 10 01:37:43.469683 kubelet[2626]: I0310 01:37:43.468845 2626 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1aab988c-165e-410e-a954-fb952044c1dc-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1aab988c-165e-410e-a954-fb952044c1dc" (UID: "1aab988c-165e-410e-a954-fb952044c1dc"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 10 01:37:43.469683 kubelet[2626]: I0310 01:37:43.468909 2626 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1aab988c-165e-410e-a954-fb952044c1dc-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1aab988c-165e-410e-a954-fb952044c1dc" (UID: "1aab988c-165e-410e-a954-fb952044c1dc"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 10 01:37:43.469683 kubelet[2626]: I0310 01:37:43.468939 2626 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1aab988c-165e-410e-a954-fb952044c1dc-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1aab988c-165e-410e-a954-fb952044c1dc" (UID: "1aab988c-165e-410e-a954-fb952044c1dc"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 10 01:37:43.469683 kubelet[2626]: I0310 01:37:43.468968 2626 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1aab988c-165e-410e-a954-fb952044c1dc-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1aab988c-165e-410e-a954-fb952044c1dc" (UID: "1aab988c-165e-410e-a954-fb952044c1dc"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 10 01:37:43.469897 kubelet[2626]: I0310 01:37:43.469038 2626 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1aab988c-165e-410e-a954-fb952044c1dc-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1aab988c-165e-410e-a954-fb952044c1dc" (UID: "1aab988c-165e-410e-a954-fb952044c1dc"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 10 01:37:43.469897 kubelet[2626]: I0310 01:37:43.469067 2626 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1aab988c-165e-410e-a954-fb952044c1dc-hostproc" (OuterVolumeSpecName: "hostproc") pod "1aab988c-165e-410e-a954-fb952044c1dc" (UID: "1aab988c-165e-410e-a954-fb952044c1dc"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 10 01:37:43.475998 kubelet[2626]: I0310 01:37:43.475893 2626 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1aab988c-165e-410e-a954-fb952044c1dc-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1aab988c-165e-410e-a954-fb952044c1dc" (UID: "1aab988c-165e-410e-a954-fb952044c1dc"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 10 01:37:43.477668 kubelet[2626]: I0310 01:37:43.477544 2626 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1aab988c-165e-410e-a954-fb952044c1dc-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1aab988c-165e-410e-a954-fb952044c1dc" (UID: "1aab988c-165e-410e-a954-fb952044c1dc"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 10 01:37:43.477779 kubelet[2626]: I0310 01:37:43.477719 2626 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1aab988c-165e-410e-a954-fb952044c1dc-kube-api-access-86qtj" (OuterVolumeSpecName: "kube-api-access-86qtj") pod "1aab988c-165e-410e-a954-fb952044c1dc" (UID: "1aab988c-165e-410e-a954-fb952044c1dc"). InnerVolumeSpecName "kube-api-access-86qtj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 10 01:37:43.477898 kubelet[2626]: I0310 01:37:43.477842 2626 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1aab988c-165e-410e-a954-fb952044c1dc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1aab988c-165e-410e-a954-fb952044c1dc" (UID: "1aab988c-165e-410e-a954-fb952044c1dc"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 10 01:37:43.568291 kubelet[2626]: I0310 01:37:43.568146 2626 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1aab988c-165e-410e-a954-fb952044c1dc-hostproc\") on node \"localhost\" DevicePath \"\"" Mar 10 01:37:43.568291 kubelet[2626]: I0310 01:37:43.568291 2626 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1aab988c-165e-410e-a954-fb952044c1dc-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 10 01:37:43.568487 kubelet[2626]: I0310 01:37:43.568313 2626 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1aab988c-165e-410e-a954-fb952044c1dc-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 10 01:37:43.568487 kubelet[2626]: I0310 01:37:43.568332 2626 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1aab988c-165e-410e-a954-fb952044c1dc-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Mar 10 01:37:43.568487 kubelet[2626]: I0310 01:37:43.568347 2626 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1aab988c-165e-410e-a954-fb952044c1dc-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 10 01:37:43.568487 kubelet[2626]: I0310 01:37:43.568361 2626 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/1aab988c-165e-410e-a954-fb952044c1dc-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 10 01:37:43.568487 kubelet[2626]: I0310 01:37:43.568377 2626 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1aab988c-165e-410e-a954-fb952044c1dc-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 10 01:37:43.568487 kubelet[2626]: I0310 01:37:43.568390 2626 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-86qtj\" (UniqueName: \"kubernetes.io/projected/1aab988c-165e-410e-a954-fb952044c1dc-kube-api-access-86qtj\") on node \"localhost\" DevicePath \"\"" Mar 10 01:37:43.568487 kubelet[2626]: I0310 01:37:43.568404 2626 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1aab988c-165e-410e-a954-fb952044c1dc-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 10 01:37:43.568487 kubelet[2626]: I0310 01:37:43.568416 2626 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1aab988c-165e-410e-a954-fb952044c1dc-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 10 01:37:43.568935 kubelet[2626]: I0310 01:37:43.568428 2626 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1aab988c-165e-410e-a954-fb952044c1dc-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 10 01:37:43.568935 kubelet[2626]: I0310 01:37:43.568441 2626 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1aab988c-165e-410e-a954-fb952044c1dc-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 10 01:37:43.568935 kubelet[2626]: I0310 01:37:43.568457 2626 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1aab988c-165e-410e-a954-fb952044c1dc-host-proc-sys-kernel\") on 
node \"localhost\" DevicePath \"\"" Mar 10 01:37:43.568935 kubelet[2626]: I0310 01:37:43.568470 2626 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1aab988c-165e-410e-a954-fb952044c1dc-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Mar 10 01:37:43.736919 kubelet[2626]: E0310 01:37:43.736503 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:37:43.964365 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28b5c95e15b0d3cebb2a3ac080cdd51d27c91fd6eeebe0958fbe601374191208-rootfs.mount: Deactivated successfully. Mar 10 01:37:43.964577 systemd[1]: var-lib-kubelet-pods-1aab988c\x2d165e\x2d410e\x2da954\x2dfb952044c1dc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d86qtj.mount: Deactivated successfully. Mar 10 01:37:43.966608 systemd[1]: var-lib-kubelet-pods-1aab988c\x2d165e\x2d410e\x2da954\x2dfb952044c1dc-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 10 01:37:43.966783 systemd[1]: var-lib-kubelet-pods-1aab988c\x2d165e\x2d410e\x2da954\x2dfb952044c1dc-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 10 01:37:43.966904 systemd[1]: var-lib-kubelet-pods-d0168a63\x2dda04\x2d4162\x2d9ead\x2d609751138a20-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlnj9f.mount: Deactivated successfully. 
Mar 10 01:37:44.193299 kubelet[2626]: I0310 01:37:44.191550 2626 scope.go:117] "RemoveContainer" containerID="0f6f87b396230bdf7aa2724dc12382997053dd343d7c9dae24c522dc50687449" Mar 10 01:37:44.203470 containerd[1475]: time="2026-03-10T01:37:44.203412493Z" level=info msg="RemoveContainer for \"0f6f87b396230bdf7aa2724dc12382997053dd343d7c9dae24c522dc50687449\"" Mar 10 01:37:44.263385 systemd[1]: Removed slice kubepods-burstable-pod1aab988c_165e_410e_a954_fb952044c1dc.slice - libcontainer container kubepods-burstable-pod1aab988c_165e_410e_a954_fb952044c1dc.slice. Mar 10 01:37:44.263542 systemd[1]: kubepods-burstable-pod1aab988c_165e_410e_a954_fb952044c1dc.slice: Consumed 21.762s CPU time. Mar 10 01:37:44.279808 containerd[1475]: time="2026-03-10T01:37:44.278855360Z" level=info msg="RemoveContainer for \"0f6f87b396230bdf7aa2724dc12382997053dd343d7c9dae24c522dc50687449\" returns successfully" Mar 10 01:37:44.280289 kubelet[2626]: I0310 01:37:44.280199 2626 scope.go:117] "RemoveContainer" containerID="a27ada461bd437ff1bdd44d0dadbf9a845080783fc283e1e558ca9b2732259ab" Mar 10 01:37:44.283312 systemd[1]: Removed slice kubepods-besteffort-podd0168a63_da04_4162_9ead_609751138a20.slice - libcontainer container kubepods-besteffort-podd0168a63_da04_4162_9ead_609751138a20.slice. Mar 10 01:37:44.283477 systemd[1]: kubepods-besteffort-podd0168a63_da04_4162_9ead_609751138a20.slice: Consumed 2.130s CPU time. 
Mar 10 01:37:44.288775 containerd[1475]: time="2026-03-10T01:37:44.288179188Z" level=info msg="RemoveContainer for \"a27ada461bd437ff1bdd44d0dadbf9a845080783fc283e1e558ca9b2732259ab\"" Mar 10 01:37:44.305870 containerd[1475]: time="2026-03-10T01:37:44.305742628Z" level=info msg="RemoveContainer for \"a27ada461bd437ff1bdd44d0dadbf9a845080783fc283e1e558ca9b2732259ab\" returns successfully" Mar 10 01:37:44.306186 kubelet[2626]: I0310 01:37:44.306145 2626 scope.go:117] "RemoveContainer" containerID="4d23fe64b9515390e7b405b3524c15c6fe164ce265debc4782b1caea5b162145" Mar 10 01:37:44.310963 containerd[1475]: time="2026-03-10T01:37:44.310922776Z" level=info msg="RemoveContainer for \"4d23fe64b9515390e7b405b3524c15c6fe164ce265debc4782b1caea5b162145\"" Mar 10 01:37:44.326295 containerd[1475]: time="2026-03-10T01:37:44.322612854Z" level=info msg="RemoveContainer for \"4d23fe64b9515390e7b405b3524c15c6fe164ce265debc4782b1caea5b162145\" returns successfully" Mar 10 01:37:44.329027 kubelet[2626]: I0310 01:37:44.328729 2626 scope.go:117] "RemoveContainer" containerID="bc020d122e54a7b68cac98b65474174821289949d6752509cc23a4de0cae3932" Mar 10 01:37:44.336661 containerd[1475]: time="2026-03-10T01:37:44.336280692Z" level=info msg="RemoveContainer for \"bc020d122e54a7b68cac98b65474174821289949d6752509cc23a4de0cae3932\"" Mar 10 01:37:44.348446 containerd[1475]: time="2026-03-10T01:37:44.348148605Z" level=info msg="RemoveContainer for \"bc020d122e54a7b68cac98b65474174821289949d6752509cc23a4de0cae3932\" returns successfully" Mar 10 01:37:44.350666 kubelet[2626]: I0310 01:37:44.349901 2626 scope.go:117] "RemoveContainer" containerID="38ab32b55c473b48e183c26e664b9bf05379bb739a687786161248bab8dfc292" Mar 10 01:37:44.359283 containerd[1475]: time="2026-03-10T01:37:44.356178710Z" level=info msg="RemoveContainer for \"38ab32b55c473b48e183c26e664b9bf05379bb739a687786161248bab8dfc292\"" Mar 10 01:37:44.366684 containerd[1475]: time="2026-03-10T01:37:44.366526474Z" level=info msg="RemoveContainer 
for \"38ab32b55c473b48e183c26e664b9bf05379bb739a687786161248bab8dfc292\" returns successfully" Mar 10 01:37:44.377698 kubelet[2626]: I0310 01:37:44.373844 2626 scope.go:117] "RemoveContainer" containerID="0f6f87b396230bdf7aa2724dc12382997053dd343d7c9dae24c522dc50687449" Mar 10 01:37:44.379096 containerd[1475]: time="2026-03-10T01:37:44.374750784Z" level=error msg="ContainerStatus for \"0f6f87b396230bdf7aa2724dc12382997053dd343d7c9dae24c522dc50687449\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0f6f87b396230bdf7aa2724dc12382997053dd343d7c9dae24c522dc50687449\": not found" Mar 10 01:37:44.382799 kubelet[2626]: E0310 01:37:44.381485 2626 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0f6f87b396230bdf7aa2724dc12382997053dd343d7c9dae24c522dc50687449\": not found" containerID="0f6f87b396230bdf7aa2724dc12382997053dd343d7c9dae24c522dc50687449" Mar 10 01:37:44.382799 kubelet[2626]: I0310 01:37:44.381554 2626 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0f6f87b396230bdf7aa2724dc12382997053dd343d7c9dae24c522dc50687449"} err="failed to get container status \"0f6f87b396230bdf7aa2724dc12382997053dd343d7c9dae24c522dc50687449\": rpc error: code = NotFound desc = an error occurred when try to find container \"0f6f87b396230bdf7aa2724dc12382997053dd343d7c9dae24c522dc50687449\": not found" Mar 10 01:37:44.382799 kubelet[2626]: I0310 01:37:44.381614 2626 scope.go:117] "RemoveContainer" containerID="a27ada461bd437ff1bdd44d0dadbf9a845080783fc283e1e558ca9b2732259ab" Mar 10 01:37:44.386796 containerd[1475]: time="2026-03-10T01:37:44.383403250Z" level=error msg="ContainerStatus for \"a27ada461bd437ff1bdd44d0dadbf9a845080783fc283e1e558ca9b2732259ab\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"a27ada461bd437ff1bdd44d0dadbf9a845080783fc283e1e558ca9b2732259ab\": not found" Mar 10 01:37:44.386796 containerd[1475]: time="2026-03-10T01:37:44.384817369Z" level=error msg="ContainerStatus for \"4d23fe64b9515390e7b405b3524c15c6fe164ce265debc4782b1caea5b162145\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4d23fe64b9515390e7b405b3524c15c6fe164ce265debc4782b1caea5b162145\": not found" Mar 10 01:37:44.386796 containerd[1475]: time="2026-03-10T01:37:44.385795785Z" level=error msg="ContainerStatus for \"bc020d122e54a7b68cac98b65474174821289949d6752509cc23a4de0cae3932\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bc020d122e54a7b68cac98b65474174821289949d6752509cc23a4de0cae3932\": not found" Mar 10 01:37:44.386796 containerd[1475]: time="2026-03-10T01:37:44.386368334Z" level=error msg="ContainerStatus for \"38ab32b55c473b48e183c26e664b9bf05379bb739a687786161248bab8dfc292\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"38ab32b55c473b48e183c26e664b9bf05379bb739a687786161248bab8dfc292\": not found" Mar 10 01:37:44.387035 kubelet[2626]: E0310 01:37:44.384376 2626 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a27ada461bd437ff1bdd44d0dadbf9a845080783fc283e1e558ca9b2732259ab\": not found" containerID="a27ada461bd437ff1bdd44d0dadbf9a845080783fc283e1e558ca9b2732259ab" Mar 10 01:37:44.387035 kubelet[2626]: I0310 01:37:44.384425 2626 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a27ada461bd437ff1bdd44d0dadbf9a845080783fc283e1e558ca9b2732259ab"} err="failed to get container status \"a27ada461bd437ff1bdd44d0dadbf9a845080783fc283e1e558ca9b2732259ab\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"a27ada461bd437ff1bdd44d0dadbf9a845080783fc283e1e558ca9b2732259ab\": not found" Mar 10 01:37:44.387035 kubelet[2626]: I0310 01:37:44.384461 2626 scope.go:117] "RemoveContainer" containerID="4d23fe64b9515390e7b405b3524c15c6fe164ce265debc4782b1caea5b162145" Mar 10 01:37:44.387035 kubelet[2626]: E0310 01:37:44.385280 2626 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4d23fe64b9515390e7b405b3524c15c6fe164ce265debc4782b1caea5b162145\": not found" containerID="4d23fe64b9515390e7b405b3524c15c6fe164ce265debc4782b1caea5b162145" Mar 10 01:37:44.387035 kubelet[2626]: I0310 01:37:44.385315 2626 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4d23fe64b9515390e7b405b3524c15c6fe164ce265debc4782b1caea5b162145"} err="failed to get container status \"4d23fe64b9515390e7b405b3524c15c6fe164ce265debc4782b1caea5b162145\": rpc error: code = NotFound desc = an error occurred when try to find container \"4d23fe64b9515390e7b405b3524c15c6fe164ce265debc4782b1caea5b162145\": not found" Mar 10 01:37:44.387035 kubelet[2626]: I0310 01:37:44.385345 2626 scope.go:117] "RemoveContainer" containerID="bc020d122e54a7b68cac98b65474174821289949d6752509cc23a4de0cae3932" Mar 10 01:37:44.387377 kubelet[2626]: E0310 01:37:44.386049 2626 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bc020d122e54a7b68cac98b65474174821289949d6752509cc23a4de0cae3932\": not found" containerID="bc020d122e54a7b68cac98b65474174821289949d6752509cc23a4de0cae3932" Mar 10 01:37:44.387377 kubelet[2626]: I0310 01:37:44.386086 2626 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bc020d122e54a7b68cac98b65474174821289949d6752509cc23a4de0cae3932"} err="failed to get container status 
\"bc020d122e54a7b68cac98b65474174821289949d6752509cc23a4de0cae3932\": rpc error: code = NotFound desc = an error occurred when try to find container \"bc020d122e54a7b68cac98b65474174821289949d6752509cc23a4de0cae3932\": not found" Mar 10 01:37:44.387377 kubelet[2626]: I0310 01:37:44.386113 2626 scope.go:117] "RemoveContainer" containerID="38ab32b55c473b48e183c26e664b9bf05379bb739a687786161248bab8dfc292" Mar 10 01:37:44.387377 kubelet[2626]: E0310 01:37:44.386503 2626 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"38ab32b55c473b48e183c26e664b9bf05379bb739a687786161248bab8dfc292\": not found" containerID="38ab32b55c473b48e183c26e664b9bf05379bb739a687786161248bab8dfc292" Mar 10 01:37:44.387377 kubelet[2626]: I0310 01:37:44.386535 2626 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"38ab32b55c473b48e183c26e664b9bf05379bb739a687786161248bab8dfc292"} err="failed to get container status \"38ab32b55c473b48e183c26e664b9bf05379bb739a687786161248bab8dfc292\": rpc error: code = NotFound desc = an error occurred when try to find container \"38ab32b55c473b48e183c26e664b9bf05379bb739a687786161248bab8dfc292\": not found" Mar 10 01:37:44.387377 kubelet[2626]: I0310 01:37:44.386558 2626 scope.go:117] "RemoveContainer" containerID="f7a99bb7629d8ec5bf3b425b6c9248e66f64d8de4bb58691c5117ce86c7b5468" Mar 10 01:37:44.398604 containerd[1475]: time="2026-03-10T01:37:44.398056579Z" level=info msg="RemoveContainer for \"f7a99bb7629d8ec5bf3b425b6c9248e66f64d8de4bb58691c5117ce86c7b5468\"" Mar 10 01:37:44.415973 containerd[1475]: time="2026-03-10T01:37:44.415746884Z" level=info msg="RemoveContainer for \"f7a99bb7629d8ec5bf3b425b6c9248e66f64d8de4bb58691c5117ce86c7b5468\" returns successfully" Mar 10 01:37:44.724611 kubelet[2626]: E0310 01:37:44.724530 2626 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 10 01:37:44.741563 kubelet[2626]: I0310 01:37:44.741402 2626 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1aab988c-165e-410e-a954-fb952044c1dc" path="/var/lib/kubelet/pods/1aab988c-165e-410e-a954-fb952044c1dc/volumes" Mar 10 01:37:44.743681 kubelet[2626]: I0310 01:37:44.743594 2626 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0168a63-da04-4162-9ead-609751138a20" path="/var/lib/kubelet/pods/d0168a63-da04-4162-9ead-609751138a20/volumes" Mar 10 01:37:44.788003 sshd[4597]: pam_unix(sshd:session): session closed for user core Mar 10 01:37:44.803142 systemd[1]: sshd@32-10.0.0.144:22-10.0.0.1:39974.service: Deactivated successfully. Mar 10 01:37:44.806176 systemd[1]: session-33.scope: Deactivated successfully. Mar 10 01:37:44.806588 systemd[1]: session-33.scope: Consumed 1.088s CPU time. Mar 10 01:37:44.808006 systemd-logind[1452]: Session 33 logged out. Waiting for processes to exit. Mar 10 01:37:44.822007 systemd[1]: Started sshd@33-10.0.0.144:22-10.0.0.1:60102.service - OpenSSH per-connection server daemon (10.0.0.1:60102). Mar 10 01:37:44.825839 systemd-logind[1452]: Removed session 33. Mar 10 01:37:44.930725 sshd[4763]: Accepted publickey for core from 10.0.0.1 port 60102 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:37:44.933787 sshd[4763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:37:44.946800 systemd-logind[1452]: New session 34 of user core. Mar 10 01:37:44.955471 systemd[1]: Started session-34.scope - Session 34 of User core. Mar 10 01:37:45.799200 sshd[4763]: pam_unix(sshd:session): session closed for user core Mar 10 01:37:45.815160 systemd[1]: sshd@33-10.0.0.144:22-10.0.0.1:60102.service: Deactivated successfully. Mar 10 01:37:45.821164 systemd[1]: session-34.scope: Deactivated successfully. 
Mar 10 01:37:45.828137 systemd-logind[1452]: Session 34 logged out. Waiting for processes to exit. Mar 10 01:37:45.837933 systemd[1]: Started sshd@34-10.0.0.144:22-10.0.0.1:60112.service - OpenSSH per-connection server daemon (10.0.0.1:60112). Mar 10 01:37:45.841297 systemd-logind[1452]: Removed session 34. Mar 10 01:37:45.849458 kubelet[2626]: I0310 01:37:45.848417 2626 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-10T01:37:45Z","lastTransitionTime":"2026-03-10T01:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 10 01:37:45.919102 sshd[4776]: Accepted publickey for core from 10.0.0.1 port 60112 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:37:45.923164 sshd[4776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:37:45.944803 systemd-logind[1452]: New session 35 of user core. Mar 10 01:37:45.954922 systemd[1]: Started session-35.scope - Session 35 of User core. Mar 10 01:37:45.962693 systemd[1]: Created slice kubepods-burstable-podb155d94a_3cb3_4cba_aeee_2c7783582b9d.slice - libcontainer container kubepods-burstable-podb155d94a_3cb3_4cba_aeee_2c7783582b9d.slice. 
Mar 10 01:37:45.994694 kubelet[2626]: I0310 01:37:45.994549 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b155d94a-3cb3-4cba-aeee-2c7783582b9d-cilium-cgroup\") pod \"cilium-vjfh5\" (UID: \"b155d94a-3cb3-4cba-aeee-2c7783582b9d\") " pod="kube-system/cilium-vjfh5" Mar 10 01:37:45.994694 kubelet[2626]: I0310 01:37:45.994683 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gblzd\" (UniqueName: \"kubernetes.io/projected/b155d94a-3cb3-4cba-aeee-2c7783582b9d-kube-api-access-gblzd\") pod \"cilium-vjfh5\" (UID: \"b155d94a-3cb3-4cba-aeee-2c7783582b9d\") " pod="kube-system/cilium-vjfh5" Mar 10 01:37:45.994951 kubelet[2626]: I0310 01:37:45.994754 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b155d94a-3cb3-4cba-aeee-2c7783582b9d-hostproc\") pod \"cilium-vjfh5\" (UID: \"b155d94a-3cb3-4cba-aeee-2c7783582b9d\") " pod="kube-system/cilium-vjfh5" Mar 10 01:37:45.994951 kubelet[2626]: I0310 01:37:45.994786 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b155d94a-3cb3-4cba-aeee-2c7783582b9d-cni-path\") pod \"cilium-vjfh5\" (UID: \"b155d94a-3cb3-4cba-aeee-2c7783582b9d\") " pod="kube-system/cilium-vjfh5" Mar 10 01:37:45.994951 kubelet[2626]: I0310 01:37:45.994812 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b155d94a-3cb3-4cba-aeee-2c7783582b9d-etc-cni-netd\") pod \"cilium-vjfh5\" (UID: \"b155d94a-3cb3-4cba-aeee-2c7783582b9d\") " pod="kube-system/cilium-vjfh5" Mar 10 01:37:45.994951 kubelet[2626]: I0310 01:37:45.994833 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b155d94a-3cb3-4cba-aeee-2c7783582b9d-cilium-config-path\") pod \"cilium-vjfh5\" (UID: \"b155d94a-3cb3-4cba-aeee-2c7783582b9d\") " pod="kube-system/cilium-vjfh5" Mar 10 01:37:45.994951 kubelet[2626]: I0310 01:37:45.994857 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b155d94a-3cb3-4cba-aeee-2c7783582b9d-bpf-maps\") pod \"cilium-vjfh5\" (UID: \"b155d94a-3cb3-4cba-aeee-2c7783582b9d\") " pod="kube-system/cilium-vjfh5" Mar 10 01:37:45.994951 kubelet[2626]: I0310 01:37:45.994879 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b155d94a-3cb3-4cba-aeee-2c7783582b9d-xtables-lock\") pod \"cilium-vjfh5\" (UID: \"b155d94a-3cb3-4cba-aeee-2c7783582b9d\") " pod="kube-system/cilium-vjfh5" Mar 10 01:37:45.995294 kubelet[2626]: I0310 01:37:45.994904 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b155d94a-3cb3-4cba-aeee-2c7783582b9d-host-proc-sys-kernel\") pod \"cilium-vjfh5\" (UID: \"b155d94a-3cb3-4cba-aeee-2c7783582b9d\") " pod="kube-system/cilium-vjfh5" Mar 10 01:37:45.995294 kubelet[2626]: I0310 01:37:45.994933 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b155d94a-3cb3-4cba-aeee-2c7783582b9d-cilium-run\") pod \"cilium-vjfh5\" (UID: \"b155d94a-3cb3-4cba-aeee-2c7783582b9d\") " pod="kube-system/cilium-vjfh5" Mar 10 01:37:45.995294 kubelet[2626]: I0310 01:37:45.994962 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/b155d94a-3cb3-4cba-aeee-2c7783582b9d-clustermesh-secrets\") pod \"cilium-vjfh5\" (UID: \"b155d94a-3cb3-4cba-aeee-2c7783582b9d\") " pod="kube-system/cilium-vjfh5" Mar 10 01:37:45.995294 kubelet[2626]: I0310 01:37:45.994991 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b155d94a-3cb3-4cba-aeee-2c7783582b9d-cilium-ipsec-secrets\") pod \"cilium-vjfh5\" (UID: \"b155d94a-3cb3-4cba-aeee-2c7783582b9d\") " pod="kube-system/cilium-vjfh5" Mar 10 01:37:45.995294 kubelet[2626]: I0310 01:37:45.995028 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b155d94a-3cb3-4cba-aeee-2c7783582b9d-lib-modules\") pod \"cilium-vjfh5\" (UID: \"b155d94a-3cb3-4cba-aeee-2c7783582b9d\") " pod="kube-system/cilium-vjfh5" Mar 10 01:37:45.995566 kubelet[2626]: I0310 01:37:45.995050 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b155d94a-3cb3-4cba-aeee-2c7783582b9d-host-proc-sys-net\") pod \"cilium-vjfh5\" (UID: \"b155d94a-3cb3-4cba-aeee-2c7783582b9d\") " pod="kube-system/cilium-vjfh5" Mar 10 01:37:45.995566 kubelet[2626]: I0310 01:37:45.995075 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b155d94a-3cb3-4cba-aeee-2c7783582b9d-hubble-tls\") pod \"cilium-vjfh5\" (UID: \"b155d94a-3cb3-4cba-aeee-2c7783582b9d\") " pod="kube-system/cilium-vjfh5" Mar 10 01:37:46.026545 sshd[4776]: pam_unix(sshd:session): session closed for user core Mar 10 01:37:46.047070 systemd[1]: sshd@34-10.0.0.144:22-10.0.0.1:60112.service: Deactivated successfully. Mar 10 01:37:46.050471 systemd[1]: session-35.scope: Deactivated successfully. 
Mar 10 01:37:46.060745 systemd-logind[1452]: Session 35 logged out. Waiting for processes to exit. Mar 10 01:37:46.076614 systemd[1]: Started sshd@35-10.0.0.144:22-10.0.0.1:60118.service - OpenSSH per-connection server daemon (10.0.0.1:60118). Mar 10 01:37:46.080460 systemd-logind[1452]: Removed session 35. Mar 10 01:37:46.169816 sshd[4784]: Accepted publickey for core from 10.0.0.1 port 60118 ssh2: RSA SHA256:ApINsR2hE/n5EcIFq8gQqQxccKFX8oefpoXcucPhBPc Mar 10 01:37:46.173452 sshd[4784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:37:46.185870 systemd-logind[1452]: New session 36 of user core. Mar 10 01:37:46.196911 systemd[1]: Started session-36.scope - Session 36 of User core. Mar 10 01:37:46.271790 kubelet[2626]: E0310 01:37:46.270904 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:37:46.273787 containerd[1475]: time="2026-03-10T01:37:46.273718488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vjfh5,Uid:b155d94a-3cb3-4cba-aeee-2c7783582b9d,Namespace:kube-system,Attempt:0,}" Mar 10 01:37:46.332434 containerd[1475]: time="2026-03-10T01:37:46.328150786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 10 01:37:46.332434 containerd[1475]: time="2026-03-10T01:37:46.328284085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 10 01:37:46.332434 containerd[1475]: time="2026-03-10T01:37:46.328306657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:37:46.332434 containerd[1475]: time="2026-03-10T01:37:46.328426932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 10 01:37:46.382612 systemd[1]: Started cri-containerd-8f8e8fc10bc3d1d538239a864b116f210b42b5acc6220a995804a79793c11259.scope - libcontainer container 8f8e8fc10bc3d1d538239a864b116f210b42b5acc6220a995804a79793c11259. Mar 10 01:37:46.453569 containerd[1475]: time="2026-03-10T01:37:46.453444864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vjfh5,Uid:b155d94a-3cb3-4cba-aeee-2c7783582b9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f8e8fc10bc3d1d538239a864b116f210b42b5acc6220a995804a79793c11259\"" Mar 10 01:37:46.457323 kubelet[2626]: E0310 01:37:46.457272 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:37:46.477379 containerd[1475]: time="2026-03-10T01:37:46.477318770Z" level=info msg="CreateContainer within sandbox \"8f8e8fc10bc3d1d538239a864b116f210b42b5acc6220a995804a79793c11259\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 10 01:37:46.510414 containerd[1475]: time="2026-03-10T01:37:46.510126021Z" level=info msg="CreateContainer within sandbox \"8f8e8fc10bc3d1d538239a864b116f210b42b5acc6220a995804a79793c11259\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"21cddf030079e01def2ce54a2506bb97067bfd4ca5d68cf96c2d88ff0b3917a3\"" Mar 10 01:37:46.513056 containerd[1475]: time="2026-03-10T01:37:46.511809697Z" level=info msg="StartContainer for \"21cddf030079e01def2ce54a2506bb97067bfd4ca5d68cf96c2d88ff0b3917a3\"" Mar 10 01:37:46.628622 systemd[1]: Started cri-containerd-21cddf030079e01def2ce54a2506bb97067bfd4ca5d68cf96c2d88ff0b3917a3.scope - libcontainer container 21cddf030079e01def2ce54a2506bb97067bfd4ca5d68cf96c2d88ff0b3917a3. 
Mar 10 01:37:46.709574 containerd[1475]: time="2026-03-10T01:37:46.709180860Z" level=info msg="StartContainer for \"21cddf030079e01def2ce54a2506bb97067bfd4ca5d68cf96c2d88ff0b3917a3\" returns successfully" Mar 10 01:37:46.737841 systemd[1]: cri-containerd-21cddf030079e01def2ce54a2506bb97067bfd4ca5d68cf96c2d88ff0b3917a3.scope: Deactivated successfully. Mar 10 01:37:46.855354 containerd[1475]: time="2026-03-10T01:37:46.855201454Z" level=info msg="shim disconnected" id=21cddf030079e01def2ce54a2506bb97067bfd4ca5d68cf96c2d88ff0b3917a3 namespace=k8s.io Mar 10 01:37:46.856284 containerd[1475]: time="2026-03-10T01:37:46.855741552Z" level=warning msg="cleaning up after shim disconnected" id=21cddf030079e01def2ce54a2506bb97067bfd4ca5d68cf96c2d88ff0b3917a3 namespace=k8s.io Mar 10 01:37:46.856284 containerd[1475]: time="2026-03-10T01:37:46.855770887Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 01:37:47.242882 kubelet[2626]: E0310 01:37:47.242822 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:37:47.257539 containerd[1475]: time="2026-03-10T01:37:47.255676904Z" level=info msg="CreateContainer within sandbox \"8f8e8fc10bc3d1d538239a864b116f210b42b5acc6220a995804a79793c11259\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 10 01:37:47.338056 containerd[1475]: time="2026-03-10T01:37:47.337909139Z" level=info msg="CreateContainer within sandbox \"8f8e8fc10bc3d1d538239a864b116f210b42b5acc6220a995804a79793c11259\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"99408391fc3aad7b8003012f68ac17e3d0bff3081d9c292d5faecec628117ec5\"" Mar 10 01:37:47.340930 containerd[1475]: time="2026-03-10T01:37:47.339467016Z" level=info msg="StartContainer for \"99408391fc3aad7b8003012f68ac17e3d0bff3081d9c292d5faecec628117ec5\"" Mar 10 01:37:47.446945 systemd[1]: Started 
cri-containerd-99408391fc3aad7b8003012f68ac17e3d0bff3081d9c292d5faecec628117ec5.scope - libcontainer container 99408391fc3aad7b8003012f68ac17e3d0bff3081d9c292d5faecec628117ec5. Mar 10 01:37:47.552913 containerd[1475]: time="2026-03-10T01:37:47.552751684Z" level=info msg="StartContainer for \"99408391fc3aad7b8003012f68ac17e3d0bff3081d9c292d5faecec628117ec5\" returns successfully" Mar 10 01:37:47.563159 systemd[1]: cri-containerd-99408391fc3aad7b8003012f68ac17e3d0bff3081d9c292d5faecec628117ec5.scope: Deactivated successfully. Mar 10 01:37:47.667907 containerd[1475]: time="2026-03-10T01:37:47.666890535Z" level=info msg="shim disconnected" id=99408391fc3aad7b8003012f68ac17e3d0bff3081d9c292d5faecec628117ec5 namespace=k8s.io Mar 10 01:37:47.667907 containerd[1475]: time="2026-03-10T01:37:47.666996834Z" level=warning msg="cleaning up after shim disconnected" id=99408391fc3aad7b8003012f68ac17e3d0bff3081d9c292d5faecec628117ec5 namespace=k8s.io Mar 10 01:37:47.667907 containerd[1475]: time="2026-03-10T01:37:47.667011801Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 01:37:48.125294 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99408391fc3aad7b8003012f68ac17e3d0bff3081d9c292d5faecec628117ec5-rootfs.mount: Deactivated successfully. Mar 10 01:37:48.252169 kubelet[2626]: E0310 01:37:48.251680 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:37:48.264758 containerd[1475]: time="2026-03-10T01:37:48.264337173Z" level=info msg="CreateContainer within sandbox \"8f8e8fc10bc3d1d538239a864b116f210b42b5acc6220a995804a79793c11259\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 10 01:37:48.319038 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3602802625.mount: Deactivated successfully. 
Mar 10 01:37:48.349071 containerd[1475]: time="2026-03-10T01:37:48.348881241Z" level=info msg="CreateContainer within sandbox \"8f8e8fc10bc3d1d538239a864b116f210b42b5acc6220a995804a79793c11259\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"184c5354fd8e12ae9f39d68e00fd4f2811384c53a23eff1ad2849915b3fa19e7\"" Mar 10 01:37:48.350494 containerd[1475]: time="2026-03-10T01:37:48.350403111Z" level=info msg="StartContainer for \"184c5354fd8e12ae9f39d68e00fd4f2811384c53a23eff1ad2849915b3fa19e7\"" Mar 10 01:37:48.427154 systemd[1]: Started cri-containerd-184c5354fd8e12ae9f39d68e00fd4f2811384c53a23eff1ad2849915b3fa19e7.scope - libcontainer container 184c5354fd8e12ae9f39d68e00fd4f2811384c53a23eff1ad2849915b3fa19e7. Mar 10 01:37:48.496012 containerd[1475]: time="2026-03-10T01:37:48.495610629Z" level=info msg="StartContainer for \"184c5354fd8e12ae9f39d68e00fd4f2811384c53a23eff1ad2849915b3fa19e7\" returns successfully" Mar 10 01:37:48.514463 systemd[1]: cri-containerd-184c5354fd8e12ae9f39d68e00fd4f2811384c53a23eff1ad2849915b3fa19e7.scope: Deactivated successfully. 
Mar 10 01:37:48.605375 containerd[1475]: time="2026-03-10T01:37:48.604923037Z" level=info msg="shim disconnected" id=184c5354fd8e12ae9f39d68e00fd4f2811384c53a23eff1ad2849915b3fa19e7 namespace=k8s.io Mar 10 01:37:48.605375 containerd[1475]: time="2026-03-10T01:37:48.605004589Z" level=warning msg="cleaning up after shim disconnected" id=184c5354fd8e12ae9f39d68e00fd4f2811384c53a23eff1ad2849915b3fa19e7 namespace=k8s.io Mar 10 01:37:48.605375 containerd[1475]: time="2026-03-10T01:37:48.605024337Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 01:37:48.641520 containerd[1475]: time="2026-03-10T01:37:48.641389236Z" level=warning msg="cleanup warnings time=\"2026-03-10T01:37:48Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 10 01:37:49.262789 kubelet[2626]: E0310 01:37:49.262626 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:37:49.276841 containerd[1475]: time="2026-03-10T01:37:49.275764185Z" level=info msg="CreateContainer within sandbox \"8f8e8fc10bc3d1d538239a864b116f210b42b5acc6220a995804a79793c11259\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 10 01:37:49.326445 containerd[1475]: time="2026-03-10T01:37:49.326137776Z" level=info msg="CreateContainer within sandbox \"8f8e8fc10bc3d1d538239a864b116f210b42b5acc6220a995804a79793c11259\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8da04ff1ffbf03b84421579d08796fe041fe22fe121cb0371581acb388819674\"" Mar 10 01:37:49.328515 containerd[1475]: time="2026-03-10T01:37:49.328436554Z" level=info msg="StartContainer for \"8da04ff1ffbf03b84421579d08796fe041fe22fe121cb0371581acb388819674\"" Mar 10 01:37:49.395682 systemd[1]: Started 
cri-containerd-8da04ff1ffbf03b84421579d08796fe041fe22fe121cb0371581acb388819674.scope - libcontainer container 8da04ff1ffbf03b84421579d08796fe041fe22fe121cb0371581acb388819674. Mar 10 01:37:49.458204 systemd[1]: cri-containerd-8da04ff1ffbf03b84421579d08796fe041fe22fe121cb0371581acb388819674.scope: Deactivated successfully. Mar 10 01:37:49.462129 containerd[1475]: time="2026-03-10T01:37:49.462058142Z" level=info msg="StartContainer for \"8da04ff1ffbf03b84421579d08796fe041fe22fe121cb0371581acb388819674\" returns successfully" Mar 10 01:37:49.531831 containerd[1475]: time="2026-03-10T01:37:49.527863218Z" level=info msg="shim disconnected" id=8da04ff1ffbf03b84421579d08796fe041fe22fe121cb0371581acb388819674 namespace=k8s.io Mar 10 01:37:49.531831 containerd[1475]: time="2026-03-10T01:37:49.527955991Z" level=warning msg="cleaning up after shim disconnected" id=8da04ff1ffbf03b84421579d08796fe041fe22fe121cb0371581acb388819674 namespace=k8s.io Mar 10 01:37:49.531831 containerd[1475]: time="2026-03-10T01:37:49.527974115Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 10 01:37:49.726547 kubelet[2626]: E0310 01:37:49.726406 2626 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 10 01:37:50.136142 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8da04ff1ffbf03b84421579d08796fe041fe22fe121cb0371581acb388819674-rootfs.mount: Deactivated successfully. 
Mar 10 01:37:50.315804 kubelet[2626]: E0310 01:37:50.314124 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:37:50.325319 containerd[1475]: time="2026-03-10T01:37:50.324546953Z" level=info msg="CreateContainer within sandbox \"8f8e8fc10bc3d1d538239a864b116f210b42b5acc6220a995804a79793c11259\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 10 01:37:50.589554 containerd[1475]: time="2026-03-10T01:37:50.587193470Z" level=info msg="CreateContainer within sandbox \"8f8e8fc10bc3d1d538239a864b116f210b42b5acc6220a995804a79793c11259\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5b1c48caa26aabf6540c20df7d10fe23e4634c73aa4e3f2023b6316e29f49f65\"" Mar 10 01:37:50.609948 containerd[1475]: time="2026-03-10T01:37:50.609204215Z" level=info msg="StartContainer for \"5b1c48caa26aabf6540c20df7d10fe23e4634c73aa4e3f2023b6316e29f49f65\"" Mar 10 01:37:50.777200 kubelet[2626]: E0310 01:37:50.773827 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:37:50.866020 systemd[1]: Started cri-containerd-5b1c48caa26aabf6540c20df7d10fe23e4634c73aa4e3f2023b6316e29f49f65.scope - libcontainer container 5b1c48caa26aabf6540c20df7d10fe23e4634c73aa4e3f2023b6316e29f49f65. 
Mar 10 01:37:51.017471 containerd[1475]: time="2026-03-10T01:37:51.016856726Z" level=info msg="StartContainer for \"5b1c48caa26aabf6540c20df7d10fe23e4634c73aa4e3f2023b6316e29f49f65\" returns successfully" Mar 10 01:37:51.406880 kubelet[2626]: E0310 01:37:51.405180 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:37:51.570781 kubelet[2626]: I0310 01:37:51.568065 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vjfh5" podStartSLOduration=6.568039338 podStartE2EDuration="6.568039338s" podCreationTimestamp="2026-03-10 01:37:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:37:51.566185178 +0000 UTC m=+253.556085746" watchObservedRunningTime="2026-03-10 01:37:51.568039338 +0000 UTC m=+253.557939865" Mar 10 01:37:52.513065 kubelet[2626]: E0310 01:37:52.512313 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:37:52.850833 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Mar 10 01:38:11.575541 kubelet[2626]: E0310 01:38:11.569101 2626 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.354s" Mar 10 01:38:12.006945 systemd-networkd[1371]: lxc_health: Link UP Mar 10 01:38:12.034846 systemd-networkd[1371]: lxc_health: Gained carrier Mar 10 01:38:12.935706 kubelet[2626]: E0310 01:38:12.935446 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:38:13.908376 kubelet[2626]: E0310 01:38:13.907975 2626 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:38:13.950411 systemd-networkd[1371]: lxc_health: Gained IPv6LL Mar 10 01:38:14.911377 kubelet[2626]: E0310 01:38:14.910453 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:38:18.911016 systemd[1]: run-containerd-runc-k8s.io-5b1c48caa26aabf6540c20df7d10fe23e4634c73aa4e3f2023b6316e29f49f65-runc.dEma1K.mount: Deactivated successfully. Mar 10 01:38:19.156356 sshd[4784]: pam_unix(sshd:session): session closed for user core Mar 10 01:38:19.177092 systemd[1]: sshd@35-10.0.0.144:22-10.0.0.1:60118.service: Deactivated successfully. Mar 10 01:38:19.182095 systemd-logind[1452]: Session 36 logged out. Waiting for processes to exit. Mar 10 01:38:19.190549 systemd[1]: session-36.scope: Deactivated successfully. Mar 10 01:38:19.191193 systemd[1]: session-36.scope: Consumed 2.545s CPU time. Mar 10 01:38:19.196342 systemd-logind[1452]: Removed session 36.